Test Report: Docker_Linux_crio_arm64 16899

f8194aff3a7b98ea29a2e4b2da65132feb1e4119:2023-07-18:30190

Failed tests (7/304)

| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 25    | TestAddons/parallel/Ingress                          | 173.66       |
| 102   | TestFunctional/parallel/License                      | 0.27         |
| 154   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 184.58       |
| 204   | TestMultiNode/serial/PingHostFrom2Pods               | 4.59         |
| 225   | TestRunningBinaryUpgrade                             | 104.84       |
| 228   | TestMissingContainerUpgrade                          | 134.02       |
| 260   | TestStoppedBinaryUpgrade/Upgrade                     | 72.14        |
TestAddons/parallel/Ingress (173.66s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-579349 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-579349 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:208: (dbg) Done: kubectl --context addons-579349 replace --force -f testdata/nginx-ingress-v1.yaml: (1.014442203s)
addons_test.go:221: (dbg) Run:  kubectl --context addons-579349 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6aaa8daa-ab02-418b-8b23-7d4f4fefdd00] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6aaa8daa-ab02-418b-8b23-7d4f4fefdd00] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.010517695s
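The readiness wait above is driven by the test helper in addons_test.go; a roughly equivalent manual check (a sketch, assuming the same addons-579349 context and the run=nginx label the test's manifest applies) is:
	kubectl --context addons-579349 wait --for=condition=ready pod \
	  --selector=run=nginx --namespace=default --timeout=8m0s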
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-579349 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-579349 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.279741801s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
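Exit status 28 is curl's operation-timeout code (CURLE_OPERATION_TIMEDOUT), relayed through the ssh session: the request inside the node never got a response from the ingress controller. A manual triage sketch (same profile name; the -v and -m flags are additions for diagnosis, not part of the test):
	kubectl --context addons-579349 get pods -n ingress-nginx -o wide
	out/minikube-linux-arm64 -p addons-579349 ssh \
	  "curl -sv -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"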
addons_test.go:262: (dbg) Run:  kubectl --context addons-579349 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:262: (dbg) Done: kubectl --context addons-579349 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.017227518s)
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-579349 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.046653636s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
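The ingress-dns check queries the cluster node's IP (192.168.49.2) directly, since that addon serves DNS from the node itself. A hand-run equivalent with tighter bounds (the -timeout flag and the dig alternative are assumptions, not what the test executes):
	nslookup -timeout=5 hello-john.test 192.168.49.2
	dig +time=5 +tries=1 @192.168.49.2 hello-john.test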
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-579349 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-579349 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-579349 addons disable ingress --alsologtostderr -v=1: (7.786095057s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-579349
helpers_test.go:235: (dbg) docker inspect addons-579349:

-- stdout --
	[
	    {
	        "Id": "1c83b133650d45aacfab3b0c93fcc908fab4cf98a6b857d210de53fd6f61143e",
	        "Created": "2023-07-17T23:38:05.194171956Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1807178,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T23:38:05.525227781Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/1c83b133650d45aacfab3b0c93fcc908fab4cf98a6b857d210de53fd6f61143e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c83b133650d45aacfab3b0c93fcc908fab4cf98a6b857d210de53fd6f61143e/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c83b133650d45aacfab3b0c93fcc908fab4cf98a6b857d210de53fd6f61143e/hosts",
	        "LogPath": "/var/lib/docker/containers/1c83b133650d45aacfab3b0c93fcc908fab4cf98a6b857d210de53fd6f61143e/1c83b133650d45aacfab3b0c93fcc908fab4cf98a6b857d210de53fd6f61143e-json.log",
	        "Name": "/addons-579349",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-579349:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-579349",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/03c764454e07adf7f449848176e5c238d3236706b9ba2de0b61805c59d762a56-init/diff:/var/lib/docker/overlay2/fb8637673150b5a3287a0dca2348bba5adfe3231dd83829c5a54b472b17aad64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03c764454e07adf7f449848176e5c238d3236706b9ba2de0b61805c59d762a56/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03c764454e07adf7f449848176e5c238d3236706b9ba2de0b61805c59d762a56/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03c764454e07adf7f449848176e5c238d3236706b9ba2de0b61805c59d762a56/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-579349",
	                "Source": "/var/lib/docker/volumes/addons-579349/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-579349",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-579349",
	                "name.minikube.sigs.k8s.io": "addons-579349",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "050c1efda74a2adcd505eb7bdd313e50fecf113fdd2f400fbe3beb41df5c15a8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34663"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34662"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34659"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34661"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34660"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/050c1efda74a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-579349": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1c83b133650d",
	                        "addons-579349"
	                    ],
	                    "NetworkID": "898fb33fad87d16454fd1a61702fd57ec2a484e83c687a9206cc2a73ed54f634",
	                    "EndpointID": "0192df2af4b353c30e54616f0964e49c7ae10c11f0444d74b594a1c8cec7140a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
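The full docker inspect dump above is what the post-mortem helper records; for hand triage it is usually enough to pull single fields with --format, e.g. (a sketch against the same container, mirroring the format strings minikube itself uses later in this log):
	docker inspect addons-579349 --format '{{.State.Status}}'
	docker inspect addons-579349 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'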
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-579349 -n addons-579349
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-579349 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-579349 logs -n 25: (1.615118932s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-823972   | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC |                     |
	|         | -p download-only-823972        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-823972   | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC |                     |
	|         | -p download-only-823972        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC | 17 Jul 23 23:37 UTC |
	| delete  | -p download-only-823972        | download-only-823972   | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC | 17 Jul 23 23:37 UTC |
	| delete  | -p download-only-823972        | download-only-823972   | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC | 17 Jul 23 23:37 UTC |
	| start   | --download-only -p             | download-docker-897229 | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC |                     |
	|         | download-docker-897229         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-897229      | download-docker-897229 | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC | 17 Jul 23 23:37 UTC |
	| start   | --download-only -p             | binary-mirror-672652   | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC |                     |
	|         | binary-mirror-672652           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44623         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-672652        | binary-mirror-672652   | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC | 17 Jul 23 23:37 UTC |
	| start   | -p addons-579349               | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC | 17 Jul 23 23:40 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:40 UTC | 17 Jul 23 23:40 UTC |
	|         | addons-579349                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:40 UTC | 17 Jul 23 23:40 UTC |
	|         | -p addons-579349               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-579349 ip               | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:40 UTC | 17 Jul 23 23:40 UTC |
	| addons  | addons-579349 addons disable   | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:40 UTC | 17 Jul 23 23:40 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-579349 addons           | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:40 UTC | 17 Jul 23 23:40 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:40 UTC | 17 Jul 23 23:40 UTC |
	|         | addons-579349                  |                        |         |         |                     |                     |
	| ssh     | addons-579349 ssh curl -s      | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-579349 addons           | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:41 UTC | 17 Jul 23 23:41 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-579349 addons           | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:41 UTC | 17 Jul 23 23:41 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-579349 ip               | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:43 UTC | 17 Jul 23 23:43 UTC |
	| addons  | addons-579349 addons disable   | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:43 UTC | 17 Jul 23 23:43 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-579349 addons disable   | addons-579349          | jenkins | v1.31.0 | 17 Jul 23 23:43 UTC | 17 Jul 23 23:43 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 23:37:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 23:37:42.277060 1806717 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:37:42.277220 1806717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:37:42.277229 1806717 out.go:309] Setting ErrFile to fd 2...
	I0717 23:37:42.277235 1806717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:37:42.277513 1806717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	I0717 23:37:42.278108 1806717 out.go:303] Setting JSON to false
	I0717 23:37:42.279189 1806717 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30007,"bootTime":1689607056,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 23:37:42.279269 1806717 start.go:138] virtualization:  
	I0717 23:37:42.282331 1806717 out.go:177] * [addons-579349] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0717 23:37:42.284687 1806717 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 23:37:42.286069 1806717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 23:37:42.286254 1806717 notify.go:220] Checking for updates...
	I0717 23:37:42.288242 1806717 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0717 23:37:42.290677 1806717 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	I0717 23:37:42.292425 1806717 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 23:37:42.294557 1806717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 23:37:42.296714 1806717 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 23:37:42.323251 1806717 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 23:37:42.323367 1806717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 23:37:42.420606 1806717 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 23:37:42.409312897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 23:37:42.420762 1806717 docker.go:294] overlay module found
	I0717 23:37:42.422737 1806717 out.go:177] * Using the docker driver based on user configuration
	I0717 23:37:42.424203 1806717 start.go:298] selected driver: docker
	I0717 23:37:42.424219 1806717 start.go:880] validating driver "docker" against <nil>
	I0717 23:37:42.424240 1806717 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 23:37:42.424910 1806717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 23:37:42.498862 1806717 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 23:37:42.489310168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 23:37:42.499050 1806717 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 23:37:42.499281 1806717 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 23:37:42.500873 1806717 out.go:177] * Using Docker driver with root privileges
	I0717 23:37:42.502483 1806717 cni.go:84] Creating CNI manager for ""
	I0717 23:37:42.502509 1806717 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 23:37:42.502526 1806717 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 23:37:42.502540 1806717 start_flags.go:319] config:
	{Name:addons-579349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-579349 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:37:42.504510 1806717 out.go:177] * Starting control plane node addons-579349 in cluster addons-579349
	I0717 23:37:42.505990 1806717 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 23:37:42.507810 1806717 out.go:177] * Pulling base image ...
	I0717 23:37:42.509355 1806717 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 23:37:42.509413 1806717 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0717 23:37:42.509425 1806717 cache.go:57] Caching tarball of preloaded images
	I0717 23:37:42.509439 1806717 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 23:37:42.509522 1806717 preload.go:174] Found /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0717 23:37:42.509531 1806717 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 23:37:42.509874 1806717 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/config.json ...
	I0717 23:37:42.509905 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/config.json: {Name:mk1c29e52b6def3ced67ce3efc276622ec33817b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:37:42.526777 1806717 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 23:37:42.526894 1806717 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 23:37:42.526923 1806717 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 23:37:42.526931 1806717 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 23:37:42.526939 1806717 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 23:37:42.526945 1806717 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0717 23:37:58.368863 1806717 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0717 23:37:58.368903 1806717 cache.go:195] Successfully downloaded all kic artifacts
	I0717 23:37:58.368952 1806717 start.go:365] acquiring machines lock for addons-579349: {Name:mkcf87893451912eb5f975ece187246227fce7e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 23:37:58.369425 1806717 start.go:369] acquired machines lock for "addons-579349" in 445.37µs
	I0717 23:37:58.369464 1806717 start.go:93] Provisioning new machine with config: &{Name:addons-579349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-579349 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 23:37:58.369562 1806717 start.go:125] createHost starting for "" (driver="docker")
	I0717 23:37:58.371466 1806717 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 23:37:58.371714 1806717 start.go:159] libmachine.API.Create for "addons-579349" (driver="docker")
	I0717 23:37:58.371746 1806717 client.go:168] LocalClient.Create starting
	I0717 23:37:58.371887 1806717 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem
	I0717 23:37:58.779492 1806717 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem
	I0717 23:37:58.959163 1806717 cli_runner.go:164] Run: docker network inspect addons-579349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 23:37:58.976786 1806717 cli_runner.go:211] docker network inspect addons-579349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 23:37:58.976872 1806717 network_create.go:281] running [docker network inspect addons-579349] to gather additional debugging logs...
	I0717 23:37:58.976892 1806717 cli_runner.go:164] Run: docker network inspect addons-579349
	W0717 23:37:58.995754 1806717 cli_runner.go:211] docker network inspect addons-579349 returned with exit code 1
	I0717 23:37:58.995793 1806717 network_create.go:284] error running [docker network inspect addons-579349]: docker network inspect addons-579349: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-579349 not found
	I0717 23:37:58.995819 1806717 network_create.go:286] output of [docker network inspect addons-579349]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-579349 not found
	
	** /stderr **
	I0717 23:37:58.995919 1806717 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 23:37:59.014443 1806717 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400114e820}
	I0717 23:37:59.014505 1806717 network_create.go:123] attempt to create docker network addons-579349 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 23:37:59.014567 1806717 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-579349 addons-579349
	I0717 23:37:59.087973 1806717 network_create.go:107] docker network addons-579349 192.168.49.0/24 created
	I0717 23:37:59.088004 1806717 kic.go:117] calculated static IP "192.168.49.2" for the "addons-579349" container
	I0717 23:37:59.088096 1806717 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 23:37:59.104399 1806717 cli_runner.go:164] Run: docker volume create addons-579349 --label name.minikube.sigs.k8s.io=addons-579349 --label created_by.minikube.sigs.k8s.io=true
	I0717 23:37:59.122715 1806717 oci.go:103] Successfully created a docker volume addons-579349
	I0717 23:37:59.122805 1806717 cli_runner.go:164] Run: docker run --rm --name addons-579349-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-579349 --entrypoint /usr/bin/test -v addons-579349:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 23:38:00.943777 1806717 cli_runner.go:217] Completed: docker run --rm --name addons-579349-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-579349 --entrypoint /usr/bin/test -v addons-579349:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.82093033s)
	I0717 23:38:00.943809 1806717 oci.go:107] Successfully prepared a docker volume addons-579349
	I0717 23:38:00.943834 1806717 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 23:38:00.943852 1806717 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 23:38:00.943937 1806717 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-579349:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 23:38:05.109174 1806717 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-579349:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.165195116s)
	I0717 23:38:05.109209 1806717 kic.go:199] duration metric: took 4.165353 seconds to extract preloaded images to volume
	W0717 23:38:05.109346 1806717 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 23:38:05.109465 1806717 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 23:38:05.177631 1806717 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-579349 --name addons-579349 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-579349 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-579349 --network addons-579349 --ip 192.168.49.2 --volume addons-579349:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 23:38:05.533282 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Running}}
	I0717 23:38:05.554241 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:05.584266 1806717 cli_runner.go:164] Run: docker exec addons-579349 stat /var/lib/dpkg/alternatives/iptables
	I0717 23:38:05.694488 1806717 oci.go:144] the created container "addons-579349" has a running status.
	I0717 23:38:05.694514 1806717 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa...
	I0717 23:38:06.032689 1806717 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 23:38:06.068000 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:06.104526 1806717 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 23:38:06.104555 1806717 kic_runner.go:114] Args: [docker exec --privileged addons-579349 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 23:38:06.219737 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:06.241554 1806717 machine.go:88] provisioning docker machine ...
	I0717 23:38:06.241590 1806717 ubuntu.go:169] provisioning hostname "addons-579349"
	I0717 23:38:06.241660 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:06.264273 1806717 main.go:141] libmachine: Using SSH client type: native
	I0717 23:38:06.264822 1806717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34663 <nil> <nil>}
	I0717 23:38:06.264843 1806717 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-579349 && echo "addons-579349" | sudo tee /etc/hostname
	I0717 23:38:06.487015 1806717 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-579349
	
	I0717 23:38:06.487162 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:06.508791 1806717 main.go:141] libmachine: Using SSH client type: native
	I0717 23:38:06.509225 1806717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34663 <nil> <nil>}
	I0717 23:38:06.509250 1806717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-579349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-579349/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-579349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 23:38:06.655512 1806717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 23:38:06.655539 1806717 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-1800837/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-1800837/.minikube}
	I0717 23:38:06.655562 1806717 ubuntu.go:177] setting up certificates
	I0717 23:38:06.655574 1806717 provision.go:83] configureAuth start
	I0717 23:38:06.655645 1806717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-579349
	I0717 23:38:06.684947 1806717 provision.go:138] copyHostCerts
	I0717 23:38:06.685022 1806717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem (1123 bytes)
	I0717 23:38:06.685136 1806717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem (1675 bytes)
	I0717 23:38:06.685193 1806717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem (1082 bytes)
	I0717 23:38:06.685241 1806717 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem org=jenkins.addons-579349 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-579349]
	I0717 23:38:06.919563 1806717 provision.go:172] copyRemoteCerts
	I0717 23:38:06.919673 1806717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 23:38:06.919727 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:06.937484 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:07.033434 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 23:38:07.065154 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 23:38:07.095108 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 23:38:07.124448 1806717 provision.go:86] duration metric: configureAuth took 468.857488ms
	I0717 23:38:07.124476 1806717 ubuntu.go:193] setting minikube options for container-runtime
	I0717 23:38:07.124716 1806717 config.go:182] Loaded profile config "addons-579349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:38:07.124836 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:07.142376 1806717 main.go:141] libmachine: Using SSH client type: native
	I0717 23:38:07.142998 1806717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34663 <nil> <nil>}
	I0717 23:38:07.143023 1806717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 23:38:07.390018 1806717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 23:38:07.390042 1806717 machine.go:91] provisioned docker machine in 1.148464035s
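The "Using SSH client type: native" lines above run each provisioning command over SSH to 127.0.0.1:34663 as the docker user, authenticating with the machine's id_rsa. A minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the port, user, and key path shown in the sshutil.go:53 lines (this is not minikube's sshutil implementation):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/path/to/machines/addons-579349/id_rsa") // placeholder path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM, not production
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:34663", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The same command the log shows being run on the machine.
	out, err := sess.CombinedOutput("sudo mkdir -p /etc/sysconfig && printf %s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio")
	fmt.Println(string(out), err)
}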
	I0717 23:38:07.390052 1806717 client.go:171] LocalClient.Create took 9.018296527s
	I0717 23:38:07.390063 1806717 start.go:167] duration metric: libmachine.API.Create for "addons-579349" took 9.01835018s
	I0717 23:38:07.390071 1806717 start.go:300] post-start starting for "addons-579349" (driver="docker")
	I0717 23:38:07.390080 1806717 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 23:38:07.390155 1806717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 23:38:07.390208 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:07.408369 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:07.505647 1806717 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 23:38:07.509870 1806717 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 23:38:07.509907 1806717 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 23:38:07.509920 1806717 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 23:38:07.509927 1806717 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 23:38:07.509937 1806717 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/addons for local assets ...
	I0717 23:38:07.510007 1806717 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/files for local assets ...
	I0717 23:38:07.510035 1806717 start.go:303] post-start completed in 119.958863ms
	I0717 23:38:07.510350 1806717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-579349
	I0717 23:38:07.529160 1806717 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/config.json ...
	I0717 23:38:07.529435 1806717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 23:38:07.529488 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:07.547004 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:07.636622 1806717 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 23:38:07.642352 1806717 start.go:128] duration metric: createHost completed in 9.272772098s
	I0717 23:38:07.642377 1806717 start.go:83] releasing machines lock for "addons-579349", held for 9.272934894s
	I0717 23:38:07.642466 1806717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-579349
	I0717 23:38:07.660030 1806717 ssh_runner.go:195] Run: cat /version.json
	I0717 23:38:07.660056 1806717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 23:38:07.660083 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:07.660115 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:07.685962 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:07.686596 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:07.912779 1806717 ssh_runner.go:195] Run: systemctl --version
	I0717 23:38:07.918259 1806717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 23:38:08.065738 1806717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 23:38:08.072541 1806717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 23:38:08.100260 1806717 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 23:38:08.100342 1806717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 23:38:08.138622 1806717 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
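The two find/mv commands above disable the loopback and any bridge/podman CNI configs so that kindnet (recommended later for the docker driver + crio runtime) is the only CNI that CRI-O loads. A rough Go equivalent of the bridge/podman rename step, as a sketch rather than minikube's cni package:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, m := range matches {
		fi, err := os.Stat(m)
		if err != nil || fi.IsDir() {
			continue // mirror find's -type f
		}
		base := filepath.Base(m)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already disabled
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", m)
		}
	}
}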
	I0717 23:38:08.138647 1806717 start.go:466] detecting cgroup driver to use...
	I0717 23:38:08.138677 1806717 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 23:38:08.138728 1806717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 23:38:08.156778 1806717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 23:38:08.170201 1806717 docker.go:196] disabling cri-docker service (if available) ...
	I0717 23:38:08.170332 1806717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 23:38:08.186495 1806717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 23:38:08.203277 1806717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 23:38:08.308010 1806717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 23:38:08.409246 1806717 docker.go:212] disabling docker service ...
	I0717 23:38:08.409325 1806717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 23:38:08.432532 1806717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 23:38:08.447280 1806717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 23:38:08.543258 1806717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 23:38:08.662753 1806717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 23:38:08.677326 1806717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 23:38:08.698999 1806717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 23:38:08.699067 1806717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 23:38:08.711936 1806717 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 23:38:08.712007 1806717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 23:38:08.725130 1806717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 23:38:08.738074 1806717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 23:38:08.750667 1806717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 23:38:08.762075 1806717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 23:38:08.773018 1806717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
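The sed commands above point CRI-O at the registry.k8s.io/pause:3.9 pause image and switch its cgroup manager to cgroupfs before the daemon-reload and restart that follow. A small Go sketch of the same in-place config rewrite (illustrative only; as logged, the real code shells out to sed):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// (?m) makes ^/$ match per line, like sed's line-oriented substitution.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}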
	I0717 23:38:08.783524 1806717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 23:38:08.886500 1806717 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 23:38:09.014733 1806717 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 23:38:09.014879 1806717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 23:38:09.020395 1806717 start.go:534] Will wait 60s for crictl version
	I0717 23:38:09.020475 1806717 ssh_runner.go:195] Run: which crictl
	I0717 23:38:09.025674 1806717 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 23:38:09.074206 1806717 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
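After restarting CRI-O, the run waits up to 60s each for the socket path and for crictl to answer. A minimal Go sketch of that kind of socket poll (an illustration of the "Will wait 60s for socket path" step, not start.go's implementation):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket retries a unix-socket dial until it succeeds or times out.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio.sock is accepting connections")
}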
	I0717 23:38:09.074369 1806717 ssh_runner.go:195] Run: crio --version
	I0717 23:38:09.121725 1806717 ssh_runner.go:195] Run: crio --version
	I0717 23:38:09.173820 1806717 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 23:38:09.175423 1806717 cli_runner.go:164] Run: docker network inspect addons-579349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 23:38:09.192483 1806717 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 23:38:09.197110 1806717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
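The bash pipeline above pins host.minikube.internal to the gateway IP in /etc/hosts, first stripping any stale entry so the edit is idempotent. A Go sketch of the same idempotent rewrite (requires root to write /etc/hosts; not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale host.minikube.internal line before re-adding it,
		// like the `grep -v $'\thost.minikube.internal$'` above.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("pinned", entry)
}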
	I0717 23:38:09.210167 1806717 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 23:38:09.210235 1806717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 23:38:09.272130 1806717 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 23:38:09.272152 1806717 crio.go:415] Images already preloaded, skipping extraction
	I0717 23:38:09.272207 1806717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 23:38:09.312642 1806717 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 23:38:09.312663 1806717 cache_images.go:84] Images are preloaded, skipping loading
	I0717 23:38:09.312738 1806717 ssh_runner.go:195] Run: crio config
	I0717 23:38:09.372446 1806717 cni.go:84] Creating CNI manager for ""
	I0717 23:38:09.372469 1806717 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 23:38:09.372488 1806717 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 23:38:09.372506 1806717 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-579349 NodeName:addons-579349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 23:38:09.372661 1806717 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-579349"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 23:38:09.372745 1806717 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-579349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-579349 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 23:38:09.372815 1806717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 23:38:09.383573 1806717 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 23:38:09.383683 1806717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 23:38:09.394311 1806717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0717 23:38:09.416177 1806717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 23:38:09.438527 1806717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0717 23:38:09.460069 1806717 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 23:38:09.464610 1806717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 23:38:09.478640 1806717 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349 for IP: 192.168.49.2
	I0717 23:38:09.478723 1806717 certs.go:190] acquiring lock for shared ca certs: {Name:mkb76b85951e1a7e4a78939a9bc1392aa19273b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:09.478909 1806717 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key
	I0717 23:38:10.028599 1806717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt ...
	I0717 23:38:10.028636 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt: {Name:mkfb807c7639ae7b7141aa32e271de2e6c613bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:10.028888 1806717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key ...
	I0717 23:38:10.028907 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key: {Name:mkb80fba46e15c16b67dd019e440661addf99c56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:10.028999 1806717 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key
	I0717 23:38:11.121093 1806717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.crt ...
	I0717 23:38:11.121125 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.crt: {Name:mk849534e5805ed427ddf1a420632e1f62d52055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:11.121311 1806717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key ...
	I0717 23:38:11.121323 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key: {Name:mk576647c019d0d121ed4b504c3e674ab50e6654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:11.122007 1806717 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.key
	I0717 23:38:11.122048 1806717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt with IP's: []
	I0717 23:38:12.391842 1806717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt ...
	I0717 23:38:12.391887 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: {Name:mk94158b8fd68751f97d3f2a9a898df2b9f22b93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:12.392144 1806717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.key ...
	I0717 23:38:12.392161 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.key: {Name:mke6f78316ed3f8e135c10c8b755118937061e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:12.392818 1806717 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.key.dd3b5fb2
	I0717 23:38:12.392873 1806717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 23:38:12.851563 1806717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.crt.dd3b5fb2 ...
	I0717 23:38:12.851599 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.crt.dd3b5fb2: {Name:mkeadafdabddef646f1db97c25f1c8648d708044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:12.851851 1806717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.key.dd3b5fb2 ...
	I0717 23:38:12.851870 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.key.dd3b5fb2: {Name:mkaf97cae355b2e9b1376745e4ef712f264feada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:12.852390 1806717 certs.go:337] copying /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.crt
	I0717 23:38:12.852480 1806717 certs.go:341] copying /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.key
	I0717 23:38:12.852531 1806717 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/proxy-client.key
	I0717 23:38:12.852552 1806717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/proxy-client.crt with IP's: []
	I0717 23:38:13.245192 1806717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/proxy-client.crt ...
	I0717 23:38:13.245224 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/proxy-client.crt: {Name:mk6c565cd3463aa580644999225f968319c26af8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:13.245421 1806717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/proxy-client.key ...
	I0717 23:38:13.245435 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/proxy-client.key: {Name:mk5d3b47c5db4e9d5686c7c5c0362dd1ba971ac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:13.246163 1806717 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 23:38:13.246212 1806717 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem (1082 bytes)
	I0717 23:38:13.246245 1806717 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem (1123 bytes)
	I0717 23:38:13.246276 1806717 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem (1675 bytes)
	I0717 23:38:13.246894 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 23:38:13.279837 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 23:38:13.310182 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 23:38:13.341432 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 23:38:13.369944 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 23:38:13.398230 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 23:38:13.426782 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 23:38:13.456889 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 23:38:13.485768 1806717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 23:38:13.518130 1806717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 23:38:13.540326 1806717 ssh_runner.go:195] Run: openssl version
	I0717 23:38:13.548029 1806717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 23:38:13.559908 1806717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 23:38:13.564697 1806717 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0717 23:38:13.564763 1806717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 23:38:13.573606 1806717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 23:38:13.585531 1806717 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 23:38:13.589880 1806717 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 23:38:13.589947 1806717 kubeadm.go:404] StartCluster: {Name:addons-579349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-579349 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:38:13.590045 1806717 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 23:38:13.590105 1806717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 23:38:13.633517 1806717 cri.go:89] found id: ""
	I0717 23:38:13.633625 1806717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 23:38:13.644487 1806717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 23:38:13.655277 1806717 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 23:38:13.655374 1806717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 23:38:13.666076 1806717 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 23:38:13.666118 1806717 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 23:38:13.767512 1806717 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 23:38:13.851211 1806717 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 23:38:29.193000 1806717 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 23:38:29.193053 1806717 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 23:38:29.193136 1806717 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 23:38:29.193188 1806717 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-aws
	I0717 23:38:29.193221 1806717 kubeadm.go:322] OS: Linux
	I0717 23:38:29.193270 1806717 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 23:38:29.193317 1806717 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 23:38:29.193362 1806717 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 23:38:29.193408 1806717 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 23:38:29.193453 1806717 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 23:38:29.193500 1806717 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 23:38:29.193543 1806717 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 23:38:29.193590 1806717 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 23:38:29.193633 1806717 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 23:38:29.193700 1806717 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 23:38:29.193791 1806717 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 23:38:29.193877 1806717 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 23:38:29.193936 1806717 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 23:38:29.197118 1806717 out.go:204]   - Generating certificates and keys ...
	I0717 23:38:29.197205 1806717 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 23:38:29.197267 1806717 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 23:38:29.197333 1806717 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 23:38:29.197387 1806717 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 23:38:29.197448 1806717 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 23:38:29.197496 1806717 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 23:38:29.197546 1806717 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 23:38:29.197658 1806717 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-579349 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 23:38:29.197708 1806717 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 23:38:29.197816 1806717 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-579349 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 23:38:29.197878 1806717 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 23:38:29.197938 1806717 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 23:38:29.197980 1806717 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 23:38:29.198033 1806717 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 23:38:29.198083 1806717 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 23:38:29.198133 1806717 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 23:38:29.198194 1806717 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 23:38:29.198246 1806717 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 23:38:29.198343 1806717 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 23:38:29.198448 1806717 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 23:38:29.198486 1806717 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 23:38:29.198549 1806717 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 23:38:29.200656 1806717 out.go:204]   - Booting up control plane ...
	I0717 23:38:29.200835 1806717 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 23:38:29.200957 1806717 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 23:38:29.201053 1806717 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 23:38:29.201182 1806717 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 23:38:29.201377 1806717 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 23:38:29.201502 1806717 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502099 seconds
	I0717 23:38:29.201653 1806717 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 23:38:29.201783 1806717 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 23:38:29.201846 1806717 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 23:38:29.202030 1806717 kubeadm.go:322] [mark-control-plane] Marking the node addons-579349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 23:38:29.202088 1806717 kubeadm.go:322] [bootstrap-token] Using token: z83pt4.mgbb81fxwdb2ryi7
	I0717 23:38:29.204279 1806717 out.go:204]   - Configuring RBAC rules ...
	I0717 23:38:29.204386 1806717 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 23:38:29.204466 1806717 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 23:38:29.204609 1806717 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 23:38:29.204730 1806717 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 23:38:29.204839 1806717 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 23:38:29.204936 1806717 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 23:38:29.205044 1806717 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 23:38:29.205085 1806717 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 23:38:29.205129 1806717 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 23:38:29.205134 1806717 kubeadm.go:322] 
	I0717 23:38:29.205191 1806717 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 23:38:29.205195 1806717 kubeadm.go:322] 
	I0717 23:38:29.205271 1806717 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 23:38:29.205275 1806717 kubeadm.go:322] 
	I0717 23:38:29.205300 1806717 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 23:38:29.205355 1806717 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 23:38:29.205403 1806717 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 23:38:29.205407 1806717 kubeadm.go:322] 
	I0717 23:38:29.205458 1806717 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 23:38:29.205462 1806717 kubeadm.go:322] 
	I0717 23:38:29.205507 1806717 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 23:38:29.205511 1806717 kubeadm.go:322] 
	I0717 23:38:29.205561 1806717 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 23:38:29.205633 1806717 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 23:38:29.205698 1806717 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 23:38:29.205702 1806717 kubeadm.go:322] 
	I0717 23:38:29.205782 1806717 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 23:38:29.205855 1806717 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 23:38:29.205858 1806717 kubeadm.go:322] 
	I0717 23:38:29.205939 1806717 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z83pt4.mgbb81fxwdb2ryi7 \
	I0717 23:38:29.206046 1806717 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b5091145d8291edee463dab95a1bdfeb1e97f89842481bec35f68788c073ce7f \
	I0717 23:38:29.206068 1806717 kubeadm.go:322] 	--control-plane 
	I0717 23:38:29.206072 1806717 kubeadm.go:322] 
	I0717 23:38:29.206152 1806717 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 23:38:29.206157 1806717 kubeadm.go:322] 
	I0717 23:38:29.206234 1806717 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z83pt4.mgbb81fxwdb2ryi7 \
	I0717 23:38:29.206338 1806717 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b5091145d8291edee463dab95a1bdfeb1e97f89842481bec35f68788c073ce7f 
	I0717 23:38:29.206347 1806717 cni.go:84] Creating CNI manager for ""
	I0717 23:38:29.206356 1806717 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 23:38:29.209697 1806717 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 23:38:29.211689 1806717 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 23:38:29.218057 1806717 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 23:38:29.218076 1806717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 23:38:29.271177 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 23:38:30.181967 1806717 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 23:38:30.182104 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:30.182193 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=addons-579349 minikube.k8s.io/updated_at=2023_07_17T23_38_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:30.204002 1806717 ops.go:34] apiserver oom_adj: -16
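The ops.go line above reads the API server's OOM score adjustment via cat /proc/$(pgrep kube-apiserver)/oom_adj; -16 means the kernel's OOM killer strongly avoids the process. A Go sketch of the pgrep-and-read equivalent, scanning /proc directly (illustrative only):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// /proc/<pid>/comm holds the process name (truncated to 15 chars,
	// which still fits "kube-apiserver").
	procs, _ := filepath.Glob("/proc/[0-9]*/comm")
	for _, comm := range procs {
		name, err := os.ReadFile(comm)
		if err != nil {
			continue // process may have exited mid-scan
		}
		if strings.TrimSpace(string(name)) != "kube-apiserver" {
			continue
		}
		adj, err := os.ReadFile(filepath.Join(filepath.Dir(comm), "oom_adj"))
		if err != nil {
			continue
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
		return
	}
	fmt.Fprintln(os.Stderr, "kube-apiserver not found")
}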
	I0717 23:38:30.364598 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:30.981180 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:31.481090 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:31.980775 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:32.481565 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:32.980602 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:33.480726 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:33.981411 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:34.480867 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:34.981298 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:35.481470 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:35.981593 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:36.480633 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:36.980611 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:37.481320 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:37.981482 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:38.480697 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:38.980961 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:39.480703 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:39.981119 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:40.481577 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:40.980658 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:41.480694 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:41.980881 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:42.480891 1806717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:38:42.607654 1806717 kubeadm.go:1081] duration metric: took 12.425598943s to wait for elevateKubeSystemPrivileges.
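The repeated `kubectl get sa default` runs above are a ~0.5s poll that waits for the default ServiceAccount to exist before granting kube-system its RBAC binding; in this run it converged after about 12.4s. A minimal Go sketch of that retry loop, assuming kubectl is on PATH and reusing the kubeconfig path from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		// A zero exit status means the ServiceAccount exists.
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for default ServiceAccount")
}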
	I0717 23:38:42.607684 1806717 kubeadm.go:406] StartCluster complete in 29.017760006s
	I0717 23:38:42.607704 1806717 settings.go:142] acquiring lock: {Name:mk74b5b544da6acf33d2b75c01a65c483577bcd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:42.607821 1806717 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0717 23:38:42.608215 1806717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/kubeconfig: {Name:mkabbac053a2a3ee682ab9031f228204945b972c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:38:42.611545 1806717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 23:38:42.611930 1806717 config.go:182] Loaded profile config "addons-579349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:38:42.612080 1806717 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0717 23:38:42.612488 1806717 addons.go:69] Setting volumesnapshots=true in profile "addons-579349"
	I0717 23:38:42.612506 1806717 addons.go:231] Setting addon volumesnapshots=true in "addons-579349"
	I0717 23:38:42.612570 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:42.613634 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:42.616207 1806717 addons.go:69] Setting cloud-spanner=true in profile "addons-579349"
	I0717 23:38:42.616245 1806717 addons.go:231] Setting addon cloud-spanner=true in "addons-579349"
	I0717 23:38:42.616300 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:42.616837 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:42.617082 1806717 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-579349"
	I0717 23:38:42.617133 1806717 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-579349"
	I0717 23:38:42.617172 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:42.617671 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:42.621262 1806717 addons.go:69] Setting default-storageclass=true in profile "addons-579349"
	I0717 23:38:42.621298 1806717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-579349"
	I0717 23:38:42.621681 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:42.625398 1806717 addons.go:69] Setting gcp-auth=true in profile "addons-579349"
	I0717 23:38:42.625438 1806717 mustload.go:65] Loading cluster: addons-579349
	I0717 23:38:42.625666 1806717 config.go:182] Loaded profile config "addons-579349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:38:42.625926 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:42.644950 1806717 addons.go:69] Setting ingress=true in profile "addons-579349"
	I0717 23:38:42.644988 1806717 addons.go:231] Setting addon ingress=true in "addons-579349"
	I0717 23:38:42.645054 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:42.645500 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:42.661120 1806717 addons.go:69] Setting metrics-server=true in profile "addons-579349"
	I0717 23:38:42.661155 1806717 addons.go:231] Setting addon metrics-server=true in "addons-579349"
	I0717 23:38:42.661201 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:42.662090 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:42.662191 1806717 addons.go:69] Setting ingress-dns=true in profile "addons-579349"
	I0717 23:38:42.662205 1806717 addons.go:231] Setting addon ingress-dns=true in "addons-579349"
	I0717 23:38:42.662246 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:42.662790 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:42.662920 1806717 addons.go:69] Setting inspektor-gadget=true in profile "addons-579349"
	I0717 23:38:42.662952 1806717 addons.go:231] Setting addon inspektor-gadget=true in "addons-579349"
	I0717 23:38:42.663111 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:42.663162 1806717 addons.go:69] Setting registry=true in profile "addons-579349"
	I0717 23:38:42.663192 1806717 addons.go:231] Setting addon registry=true in "addons-579349"
	I0717 23:38:42.663238 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:42.663641 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:42.663826 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:42.670467 1806717 addons.go:69] Setting storage-provisioner=true in profile "addons-579349"
	I0717 23:38:42.671327 1806717 addons.go:231] Setting addon storage-provisioner=true in "addons-579349"
	I0717 23:38:42.674971 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:42.675542 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
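The interleaved "Setting addon ...=true" timestamps above suggest the enabled addons are processed concurrently. A hedged sketch of that fan-out pattern with errgroup; the concurrency and the enableAddon helper are assumptions for illustration, not minikube's addons package:

package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

// enableAddon is a placeholder: the real work is applying the addon's
// manifests to the cluster, as the scp/kubectl lines below show.
func enableAddon(profile, name string) error {
	fmt.Printf("Setting addon %s=true in %q\n", name, profile)
	return nil
}

func main() {
	addons := []string{
		"volumesnapshots", "cloud-spanner", "csi-hostpath-driver",
		"default-storageclass", "gcp-auth", "ingress", "ingress-dns",
		"inspektor-gadget", "metrics-server", "registry", "storage-provisioner",
	}
	var g errgroup.Group
	for _, a := range addons {
		a := a // capture loop variable (pre-Go 1.22 semantics)
		g.Go(func() error { return enableAddon("addons-579349", a) })
	}
	if err := g.Wait(); err != nil {
		panic(err)
	}
}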
	I0717 23:38:42.714611 1806717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 23:38:42.748474 1806717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 23:38:42.758310 1806717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 23:38:42.775690 1806717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 23:38:42.777814 1806717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 23:38:42.783458 1806717 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 23:38:42.786526 1806717 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 23:38:42.789746 1806717 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 23:38:42.794460 1806717 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 23:38:42.794667 1806717 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 23:38:42.800668 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 23:38:42.803048 1806717 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0717 23:38:42.800631 1806717 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 23:38:42.800737 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:42.806451 1806717 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0717 23:38:42.806467 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0717 23:38:42.806531 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:42.806626 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 23:38:42.806659 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:42.815800 1806717 addons.go:231] Setting addon default-storageclass=true in "addons-579349"
	I0717 23:38:42.815843 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:42.816279 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:42.876354 1806717 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0717 23:38:42.876289 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:42.887051 1806717 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0717 23:38:42.881682 1806717 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 23:38:42.890646 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 23:38:42.890731 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:42.894664 1806717 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 23:38:42.899345 1806717 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 23:38:42.901865 1806717 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 23:38:42.901886 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0717 23:38:42.901947 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:42.920935 1806717 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0717 23:38:42.928750 1806717 out.go:177]   - Using image docker.io/registry:2.8.1
	I0717 23:38:42.932089 1806717 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 23:38:42.932112 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 23:38:42.932178 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:42.934752 1806717 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 23:38:42.939752 1806717 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 23:38:42.939774 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 23:38:42.939855 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:42.942454 1806717 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0717 23:38:42.944574 1806717 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 23:38:42.944596 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 23:38:42.944667 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:42.948682 1806717 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0717 23:38:42.950508 1806717 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 23:38:42.950528 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 23:38:42.950597 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:43.021568 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:43.057974 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:43.062315 1806717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
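That one-liner edits the coredns ConfigMap in place: the first sed expression inserts a hosts block immediately before the "forward . /etc/resolv.conf" directive, the second inserts a "log" directive before "errors", and the result is pushed back with kubectl replace. Reconstructed from those sed expressions alone, the patched Corefile fragment should read approximately:

    log
    errors
    # ... other plugins unchanged ...
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

This is what makes host.minikube.internal resolvable from inside the cluster; the start.go:901 line further below confirms the injection.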
	I0717 23:38:43.082843 1806717 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 23:38:43.082864 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 23:38:43.082930 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:43.093435 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:43.156763 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:43.170948 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:43.187892 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:43.188773 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:43.205200 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:43.207899 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:43.227888 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:43.312675 1806717 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-579349" context rescaled to 1 replicas
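The rescale that kapi.go:248 reports is the same operation as scaling the deployment by hand; a hedged kubectl equivalent, using this run's context name, would be:

    kubectl --context addons-579349 -n kube-system scale deployment coredns --replicas=1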
	I0717 23:38:43.312772 1806717 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 23:38:43.319148 1806717 out.go:177] * Verifying Kubernetes components...
	I0717 23:38:43.321434 1806717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:38:43.450120 1806717 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 23:38:43.450202 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 23:38:43.462275 1806717 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 23:38:43.462300 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 23:38:43.508285 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 23:38:43.545178 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 23:38:43.613370 1806717 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 23:38:43.613389 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 23:38:43.645986 1806717 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 23:38:43.646007 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 23:38:43.652734 1806717 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 23:38:43.652805 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 23:38:43.689922 1806717 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 23:38:43.689993 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 23:38:43.701869 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 23:38:43.726055 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 23:38:43.728876 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 23:38:43.743924 1806717 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 23:38:43.743949 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 23:38:43.807685 1806717 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 23:38:43.807756 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 23:38:43.810593 1806717 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 23:38:43.810658 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 23:38:43.827092 1806717 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 23:38:43.827165 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 23:38:43.828311 1806717 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 23:38:43.828370 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 23:38:43.905776 1806717 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 23:38:43.905847 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 23:38:43.971012 1806717 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 23:38:43.971037 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 23:38:43.988840 1806717 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 23:38:43.988864 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 23:38:44.012560 1806717 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 23:38:44.012584 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 23:38:44.015193 1806717 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 23:38:44.015216 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 23:38:44.075206 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 23:38:44.121988 1806717 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 23:38:44.122063 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 23:38:44.132731 1806717 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 23:38:44.132789 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 23:38:44.179934 1806717 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 23:38:44.180019 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 23:38:44.180643 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 23:38:44.266532 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 23:38:44.316459 1806717 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 23:38:44.316528 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 23:38:44.399766 1806717 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 23:38:44.399834 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 23:38:44.537508 1806717 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 23:38:44.537580 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 23:38:44.580415 1806717 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 23:38:44.580475 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 23:38:44.702124 1806717 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 23:38:44.702182 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0717 23:38:44.735110 1806717 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 23:38:44.735178 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 23:38:44.893615 1806717 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 23:38:44.893682 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 23:38:44.905079 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 23:38:44.950172 1806717 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 23:38:44.950240 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 23:38:44.994667 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 23:38:45.428863 1806717 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.366514037s)
	I0717 23:38:45.428895 1806717 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 23:38:45.428935 1806717 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.107420921s)
	I0717 23:38:45.429779 1806717 node_ready.go:35] waiting up to 6m0s for node "addons-579349" to be "Ready" ...
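The node_ready.go poll below keys off the node's Ready condition, which stays False until the kubelet and CNI settle (about 30 seconds later in this run). A manual spot-check of the same signal:

    kubectl --context addons-579349 get node addons-579349 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'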
	I0717 23:38:46.417663 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.909297626s)
	I0717 23:38:46.417715 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.872516662s)
	I0717 23:38:47.638348 1806717 node_ready.go:58] node "addons-579349" has status "Ready":"False"
	I0717 23:38:47.697151 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.995249196s)
	I0717 23:38:47.697220 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.971107415s)
	I0717 23:38:48.419766 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.344533883s)
	I0717 23:38:48.420215 1806717 addons.go:467] Verifying addon registry=true in "addons-579349"
	I0717 23:38:48.422085 1806717 out.go:177] * Verifying registry addon...
	I0717 23:38:48.420366 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.239195386s)
	I0717 23:38:48.422216 1806717 addons.go:467] Verifying addon metrics-server=true in "addons-579349"
	I0717 23:38:48.420007 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.153399897s)
	I0717 23:38:48.420061 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.514909394s)
	I0717 23:38:48.419696 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.690740949s)
	I0717 23:38:48.422427 1806717 addons.go:467] Verifying addon ingress=true in "addons-579349"
	I0717 23:38:48.424387 1806717 out.go:177] * Verifying ingress addon...
	W0717 23:38:48.422554 1806717 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
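The failure above is an ordering race rather than a bad manifest: the VolumeSnapshotClass is submitted in the same kubectl apply as the CRDs that define it, and the CRD is not yet established when the class is validated, hence "ensure CRDs are installed first". One way to serialize it by hand (paths as in this run's addon set) is:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

The retry that follows relies on the same effect: by the time it runs, the CRDs from the first pass have been established.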
	I0717 23:38:48.426717 1806717 retry.go:31] will retry after 288.137671ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 23:38:48.427569 1806717 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 23:38:48.429954 1806717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 23:38:48.448764 1806717 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 23:38:48.449993 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:48.460862 1806717 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 23:38:48.460935 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:48.718554 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 23:38:48.789325 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.794561279s)
	I0717 23:38:48.789405 1806717 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-579349"
	I0717 23:38:48.792453 1806717 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 23:38:48.795399 1806717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 23:38:48.806574 1806717 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 23:38:48.806646 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
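These kapi.go:96 lines are a phase poll repeated until the pods leave Pending; expressed as a single kubectl command (namespace and label selector from the log, timeout illustrative), the wait would be approximately:

    kubectl --context addons-579349 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=Ready --timeout=120s

with the caveat that kubectl waits on the Ready condition while the poll above tracks pod phase.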
	I0717 23:38:48.954931 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:48.965397 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:49.312136 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:49.461790 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:49.473898 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:49.704014 1806717 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 23:38:49.704203 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:49.744388 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:49.813442 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:49.961047 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:49.972536 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:49.995382 1806717 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 23:38:50.056964 1806717 addons.go:231] Setting addon gcp-auth=true in "addons-579349"
	I0717 23:38:50.057055 1806717 host.go:66] Checking if "addons-579349" exists ...
	I0717 23:38:50.057570 1806717 cli_runner.go:164] Run: docker container inspect addons-579349 --format={{.State.Status}}
	I0717 23:38:50.091892 1806717 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 23:38:50.091948 1806717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-579349
	I0717 23:38:50.101480 1806717 node_ready.go:58] node "addons-579349" has status "Ready":"False"
	I0717 23:38:50.136178 1806717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34663 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/addons-579349/id_rsa Username:docker}
	I0717 23:38:50.328759 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:50.460082 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:50.506387 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:50.688735 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.970085953s)
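The --force on the retry lets kubectl apply fall back to delete-and-re-create when an in-place update is impossible, but the decisive change here is simply elapsed time: the snapshot CRDs created by the first attempt are now established, so the VolumeSnapshotClass maps. A spot-check of that condition:

    kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Established")].status}'   # "True" once usable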
	I0717 23:38:50.691448 1806717 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 23:38:50.693271 1806717 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0717 23:38:50.694971 1806717 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 23:38:50.694996 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 23:38:50.760727 1806717 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 23:38:50.760753 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 23:38:50.825239 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:50.833454 1806717 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 23:38:50.833515 1806717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0717 23:38:50.908531 1806717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 23:38:50.956768 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:50.966952 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:51.316918 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:51.463043 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:51.472364 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:51.873662 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:52.004864 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:52.041020 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:52.103345 1806717 node_ready.go:58] node "addons-579349" has status "Ready":"False"
	I0717 23:38:52.321255 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:52.469217 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:52.479037 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:52.812195 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:52.955211 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:52.965994 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:53.337444 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:53.492793 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:53.496829 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:53.617052 1806717 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.708453476s)
	I0717 23:38:53.618752 1806717 addons.go:467] Verifying addon gcp-auth=true in "addons-579349"
	I0717 23:38:53.622616 1806717 out.go:177] * Verifying gcp-auth addon...
	I0717 23:38:53.625535 1806717 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 23:38:53.693706 1806717 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 23:38:53.693769 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:53.811690 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:53.955416 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:53.978583 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:54.197622 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:54.312361 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:54.455659 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:54.467071 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:54.601683 1806717 node_ready.go:58] node "addons-579349" has status "Ready":"False"
	I0717 23:38:54.698231 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:54.811600 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:54.955548 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:54.968281 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:55.199015 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:55.311032 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:55.458294 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:55.466152 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:55.698036 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:55.812622 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:55.955186 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:55.967971 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:56.198926 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:56.311719 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:56.463055 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:56.467097 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:56.602637 1806717 node_ready.go:58] node "addons-579349" has status "Ready":"False"
	I0717 23:38:56.699055 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:56.811491 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:56.955680 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:56.967343 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:57.198843 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:57.311889 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:57.454201 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:57.465829 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:57.703399 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:57.812993 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:57.957506 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:57.971329 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:58.204563 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:58.312415 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:58.456058 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:58.467625 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:58.699123 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:58.811645 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:58.955559 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:58.968444 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:59.101203 1806717 node_ready.go:58] node "addons-579349" has status "Ready":"False"
	I0717 23:38:59.203145 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:59.314841 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:59.455702 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:59.466505 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:38:59.698203 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:38:59.812565 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:38:59.954265 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:38:59.965583 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:00.200488 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:00.314248 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:00.455965 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:00.466750 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:00.698252 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:00.812003 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:00.957898 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:00.966870 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:01.101330 1806717 node_ready.go:58] node "addons-579349" has status "Ready":"False"
	I0717 23:39:01.197647 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:01.311876 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:01.454426 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:01.466828 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:01.698154 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:01.811832 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:01.954744 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:01.966116 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:02.198130 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:02.311176 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:02.455195 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:02.465278 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:02.698048 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:02.810834 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:02.955165 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:02.965249 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:03.103114 1806717 node_ready.go:58] node "addons-579349" has status "Ready":"False"
	I0717 23:39:03.198093 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:03.311798 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:03.455193 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:03.465235 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:03.698002 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:03.811404 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:03.955170 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:03.965307 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:04.198139 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:04.311794 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:04.454685 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:04.466762 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:04.697661 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:04.811741 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:04.954520 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:04.965816 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:05.198287 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:05.311446 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:05.455467 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:05.465825 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:05.601287 1806717 node_ready.go:58] node "addons-579349" has status "Ready":"False"
	I0717 23:39:05.697661 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:05.811741 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:05.954567 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:05.965486 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:06.197731 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:06.311267 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:06.454950 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:06.464887 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:06.697553 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:06.811448 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:06.954281 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:06.965356 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:07.197616 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:07.311827 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:07.454302 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:07.465301 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:07.601398 1806717 node_ready.go:58] node "addons-579349" has status "Ready":"False"
	I0717 23:39:07.697956 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:07.811002 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:07.954816 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:07.964619 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:08.198204 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:08.312238 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:08.454583 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:08.465731 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:08.697514 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:08.810842 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:08.954623 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:08.965542 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:09.197733 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:09.312272 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:09.454762 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:09.465574 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:09.698167 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:09.811149 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:09.954724 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:09.965673 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:10.100480 1806717 node_ready.go:58] node "addons-579349" has status "Ready":"False"
	I0717 23:39:10.197730 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:10.311129 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:10.455029 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:10.465143 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:10.698745 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:10.812650 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:10.955676 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:10.965819 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[23:39:11.197–23:39:14.965: kapi.go:96 polls every ~500ms; pods for "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "app.kubernetes.io/name=ingress-nginx", and "kubernetes.io/minikube-addons=registry" remain Pending; node "addons-579349" still reports "Ready":"False" at 23:39:12.101 and 23:39:14.601]
	I0717 23:39:15.129295 1806717 node_ready.go:49] node "addons-579349" has status "Ready":"True"
	I0717 23:39:15.129707 1806717 node_ready.go:38] duration metric: took 29.699891785s waiting for node "addons-579349" to be "Ready" ...
	I0717 23:39:15.129754 1806717 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:39:15.165844 1806717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-xr68f" in "kube-system" namespace to be "Ready" ...
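The ~500ms cadence of the kapi.go:96 lines above is a plain poll-until-ready loop. A minimal, stdlib-only Go sketch of that pattern (hypothetical names, not minikube's actual kapi code):

// pollready.go - a minimal poll-until-ready loop with a caller-supplied
// condition func; names here are illustrative, not taken from minikube.
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil calls cond immediately and then every interval, until it
// returns true, returns an error, or the timeout elapses.
func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		<-ticker.C
	}
}

func main() {
	start := time.Now()
	// Toy condition: "ready" after two seconds, standing in for a pod
	// reaching a Ready status.
	err := pollUntil(500*time.Millisecond, 10*time.Second, func() (bool, error) {
		return time.Since(start) > 2*time.Second, nil
	})
	fmt.Println("wait finished, err =", err)
}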
	I0717 23:39:15.204629 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 23:39:15.313844 1806717 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 23:39:15.313917 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:15.523719 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:15.524411 1806717 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 23:39:15.524455 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[23:39:15.699–23:39:17.202: the same four selectors polled; all still Pending]
	I0717 23:39:17.218490 1806717 pod_ready.go:92] pod "coredns-5d78c9869d-xr68f" in "kube-system" namespace has status "Ready":"True"
	I0717 23:39:17.218553 1806717 pod_ready.go:81] duration metric: took 2.052602301s waiting for pod "coredns-5d78c9869d-xr68f" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:17.218589 1806717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-579349" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:17.226059 1806717 pod_ready.go:92] pod "etcd-addons-579349" in "kube-system" namespace has status "Ready":"True"
	I0717 23:39:17.226127 1806717 pod_ready.go:81] duration metric: took 7.518448ms waiting for pod "etcd-addons-579349" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:17.226156 1806717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-579349" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:17.237736 1806717 pod_ready.go:92] pod "kube-apiserver-addons-579349" in "kube-system" namespace has status "Ready":"True"
	I0717 23:39:17.237799 1806717 pod_ready.go:81] duration metric: took 11.622382ms waiting for pod "kube-apiserver-addons-579349" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:17.237824 1806717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-579349" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:17.256266 1806717 pod_ready.go:92] pod "kube-controller-manager-addons-579349" in "kube-system" namespace has status "Ready":"True"
	I0717 23:39:17.256340 1806717 pod_ready.go:81] duration metric: took 18.496528ms waiting for pod "kube-controller-manager-addons-579349" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:17.256369 1806717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t2m8c" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:17.278567 1806717 pod_ready.go:92] pod "kube-proxy-t2m8c" in "kube-system" namespace has status "Ready":"True"
	I0717 23:39:17.280397 1806717 pod_ready.go:81] duration metric: took 23.998012ms waiting for pod "kube-proxy-t2m8c" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:17.280431 1806717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-579349" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:17.316352 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:39:17.455308 1806717 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 23:39:17.465563 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 23:39:17.610059 1806717 pod_ready.go:92] pod "kube-scheduler-addons-579349" in "kube-system" namespace has status "Ready":"True"
	I0717 23:39:17.610082 1806717 pod_ready.go:81] duration metric: took 329.619277ms waiting for pod "kube-scheduler-addons-579349" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:17.610095 1806717 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-4m7tc" in "kube-system" namespace to be "Ready" ...
	[23:39:17.698–23:39:19.966: gcp-auth, csi-hostpath-driver, ingress-nginx, and registry pods polled; all still Pending]
	I0717 23:39:20.018908 1806717 pod_ready.go:102] pod "metrics-server-844d8db974-4m7tc" in "kube-system" namespace has status "Ready":"False"
	[23:39:20.198–23:39:21.967: the same four selectors polled; all still Pending]
	I0717 23:39:22.020202 1806717 pod_ready.go:102] pod "metrics-server-844d8db974-4m7tc" in "kube-system" namespace has status "Ready":"False"
	[23:39:22.198–23:39:23.967: the same four selectors polled; all still Pending]
	I0717 23:39:24.026243 1806717 pod_ready.go:102] pod "metrics-server-844d8db974-4m7tc" in "kube-system" namespace has status "Ready":"False"
	[23:39:24.200–23:39:25.986: the same four selectors polled; all still Pending]
	I0717 23:39:26.034524 1806717 pod_ready.go:92] pod "metrics-server-844d8db974-4m7tc" in "kube-system" namespace has status "Ready":"True"
	I0717 23:39:26.034596 1806717 pod_ready.go:81] duration metric: took 8.424483481s waiting for pod "metrics-server-844d8db974-4m7tc" in "kube-system" namespace to be "Ready" ...
	I0717 23:39:26.034634 1806717 pod_ready.go:38] duration metric: took 10.904849568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:39:26.034685 1806717 api_server.go:52] waiting for apiserver process to appear ...
	I0717 23:39:26.034779 1806717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:39:26.077313 1806717 api_server.go:72] duration metric: took 42.764496149s to wait for apiserver process to appear ...
	I0717 23:39:26.077403 1806717 api_server.go:88] waiting for apiserver healthz status ...
	I0717 23:39:26.077435 1806717 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 23:39:26.098244 1806717 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 23:39:26.111792 1806717 api_server.go:141] control plane version: v1.27.3
	I0717 23:39:26.111864 1806717 api_server.go:131] duration metric: took 34.439622ms to wait for apiserver health ...
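The healthz step above is an HTTPS GET against the apiserver that expects status 200 and the literal body "ok" (both visible in the log). A minimal sketch of such a probe, assuming a cluster-signed certificate (hence the skip-verify transport) and taking the URL from this run:

// healthzprobe.go - sketch of an apiserver /healthz check; the skip-verify
// TLS config is an illustrative assumption, not minikube's actual client.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is cluster-signed, so this bare probe skips
		// verification; a real client would trust the cluster CA instead.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("unhealthy: status %d, body %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://192.168.49.2:8443/healthz"))
}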
	I0717 23:39:26.111888 1806717 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 23:39:26.134900 1806717 system_pods.go:59] 17 kube-system pods found
	I0717 23:39:26.134992 1806717 system_pods.go:61] "coredns-5d78c9869d-xr68f" [94652c88-140d-44d8-8d23-0a1f75e65abc] Running
	I0717 23:39:26.135020 1806717 system_pods.go:61] "csi-hostpath-attacher-0" [f9dcd48d-5bb8-493f-a448-dac0cf3f8ee5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 23:39:26.135063 1806717 system_pods.go:61] "csi-hostpath-resizer-0" [4b5b14c3-d4a7-4ec8-a7b1-8f130698b31a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 23:39:26.135091 1806717 system_pods.go:61] "csi-hostpathplugin-594b5" [19607f31-c4bc-4b9b-9789-c1b4328c24b0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 23:39:26.135114 1806717 system_pods.go:61] "etcd-addons-579349" [665ee5d4-86df-4762-97b5-5ebc1540ef2a] Running
	I0717 23:39:26.135137 1806717 system_pods.go:61] "kindnet-nlpqh" [bb38a46d-a568-4adc-87f0-16744aba40aa] Running
	I0717 23:39:26.135169 1806717 system_pods.go:61] "kube-apiserver-addons-579349" [13795444-8760-4195-ab57-e5c415e48c67] Running
	I0717 23:39:26.135196 1806717 system_pods.go:61] "kube-controller-manager-addons-579349" [5267a6f0-a192-489f-9d39-bb56b9985ab2] Running
	I0717 23:39:26.135221 1806717 system_pods.go:61] "kube-ingress-dns-minikube" [c1500102-8d64-4ca7-b348-9dea02fb4cc6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 23:39:26.135242 1806717 system_pods.go:61] "kube-proxy-t2m8c" [e3f8be82-028c-4963-977e-2149a6a0d092] Running
	I0717 23:39:26.135273 1806717 system_pods.go:61] "kube-scheduler-addons-579349" [71241258-781c-496f-aa51-c30681f7a38d] Running
	I0717 23:39:26.135296 1806717 system_pods.go:61] "metrics-server-844d8db974-4m7tc" [0a10e3b3-112a-449a-9a48-3f6cdaf01d6a] Running
	I0717 23:39:26.135317 1806717 system_pods.go:61] "registry-proxy-rb9fx" [3e2266e7-468f-409e-bde8-4a46879b119d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 23:39:26.135338 1806717 system_pods.go:61] "registry-rn6hq" [bce4f7ce-2d6a-4082-97c3-291f3eb7fcc2] Running
	I0717 23:39:26.135373 1806717 system_pods.go:61] "snapshot-controller-75bbb956b9-5j7g4" [d8844522-7167-4fa3-888d-d8fff0e3450e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 23:39:26.135397 1806717 system_pods.go:61] "snapshot-controller-75bbb956b9-hqfst" [f28f8e86-c831-41d4-bacf-0706517d8b2d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 23:39:26.135414 1806717 system_pods.go:61] "storage-provisioner" [b16d5357-2a1d-4db6-b934-faf43c418717] Running
	I0717 23:39:26.135435 1806717 system_pods.go:74] duration metric: took 23.530127ms to wait for pod list to return data ...
	I0717 23:39:26.135455 1806717 default_sa.go:34] waiting for default service account to be created ...
	I0717 23:39:26.143307 1806717 default_sa.go:45] found service account: "default"
	I0717 23:39:26.143365 1806717 default_sa.go:55] duration metric: took 7.883359ms for default service account to be created ...
	I0717 23:39:26.143402 1806717 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 23:39:26.155232 1806717 system_pods.go:86] 17 kube-system pods found
	I0717 23:39:26.155306 1806717 system_pods.go:89] "coredns-5d78c9869d-xr68f" [94652c88-140d-44d8-8d23-0a1f75e65abc] Running
	I0717 23:39:26.155332 1806717 system_pods.go:89] "csi-hostpath-attacher-0" [f9dcd48d-5bb8-493f-a448-dac0cf3f8ee5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 23:39:26.155354 1806717 system_pods.go:89] "csi-hostpath-resizer-0" [4b5b14c3-d4a7-4ec8-a7b1-8f130698b31a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 23:39:26.155395 1806717 system_pods.go:89] "csi-hostpathplugin-594b5" [19607f31-c4bc-4b9b-9789-c1b4328c24b0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 23:39:26.155422 1806717 system_pods.go:89] "etcd-addons-579349" [665ee5d4-86df-4762-97b5-5ebc1540ef2a] Running
	I0717 23:39:26.155442 1806717 system_pods.go:89] "kindnet-nlpqh" [bb38a46d-a568-4adc-87f0-16744aba40aa] Running
	I0717 23:39:26.155461 1806717 system_pods.go:89] "kube-apiserver-addons-579349" [13795444-8760-4195-ab57-e5c415e48c67] Running
	I0717 23:39:26.155481 1806717 system_pods.go:89] "kube-controller-manager-addons-579349" [5267a6f0-a192-489f-9d39-bb56b9985ab2] Running
	I0717 23:39:26.155511 1806717 system_pods.go:89] "kube-ingress-dns-minikube" [c1500102-8d64-4ca7-b348-9dea02fb4cc6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 23:39:26.155536 1806717 system_pods.go:89] "kube-proxy-t2m8c" [e3f8be82-028c-4963-977e-2149a6a0d092] Running
	I0717 23:39:26.155557 1806717 system_pods.go:89] "kube-scheduler-addons-579349" [71241258-781c-496f-aa51-c30681f7a38d] Running
	I0717 23:39:26.155576 1806717 system_pods.go:89] "metrics-server-844d8db974-4m7tc" [0a10e3b3-112a-449a-9a48-3f6cdaf01d6a] Running
	I0717 23:39:26.155620 1806717 system_pods.go:89] "registry-proxy-rb9fx" [3e2266e7-468f-409e-bde8-4a46879b119d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 23:39:26.155643 1806717 system_pods.go:89] "registry-rn6hq" [bce4f7ce-2d6a-4082-97c3-291f3eb7fcc2] Running
	I0717 23:39:26.155665 1806717 system_pods.go:89] "snapshot-controller-75bbb956b9-5j7g4" [d8844522-7167-4fa3-888d-d8fff0e3450e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 23:39:26.155691 1806717 system_pods.go:89] "snapshot-controller-75bbb956b9-hqfst" [f28f8e86-c831-41d4-bacf-0706517d8b2d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 23:39:26.155723 1806717 system_pods.go:89] "storage-provisioner" [b16d5357-2a1d-4db6-b934-faf43c418717] Running
	I0717 23:39:26.155750 1806717 system_pods.go:126] duration metric: took 12.325957ms to wait for k8s-apps to be running ...
	I0717 23:39:26.155772 1806717 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 23:39:26.155856 1806717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:39:26.175645 1806717 system_svc.go:56] duration metric: took 19.865064ms WaitForService to wait for kubelet.
	I0717 23:39:26.175715 1806717 kubeadm.go:581] duration metric: took 42.862903113s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 23:39:26.175751 1806717 node_conditions.go:102] verifying NodePressure condition ...
	I0717 23:39:26.180289 1806717 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 23:39:26.180361 1806717 node_conditions.go:123] node cpu capacity is 2
	I0717 23:39:26.180395 1806717 node_conditions.go:105] duration metric: took 4.624766ms to run NodePressure ...
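The capacity figures above come straight off the node's status, expressed as Kubernetes resource quantities; "203034800Ki" can be decoded with the apimachinery resource package. A small sketch, assuming k8s.io/apimachinery is on the module path:

// capacity.go - decoding the node capacity strings seen above; assumes the
// k8s.io/apimachinery module is available to the build.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	ephemeral := resource.MustParse("203034800Ki") // ephemeral-storage capacity
	cpu := resource.MustParse("2")                 // cpu capacity

	// Value() returns the quantity in its base unit (bytes / whole CPUs).
	fmt.Printf("ephemeral-storage: %d bytes (~%.0f GiB)\n",
		ephemeral.Value(), float64(ephemeral.Value())/(1<<30))
	fmt.Printf("cpu: %d cores\n", cpu.Value())
}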
	I0717 23:39:26.180422 1806717 start.go:228] waiting for startup goroutines ...
	[23:39:26.202–23:39:31.959: polling continues for gcp-auth, csi-hostpath-driver, ingress-nginx, and registry; all still Pending]
	I0717 23:39:31.974000 1806717 kapi.go:107] duration metric: took 43.544043816s to wait for kubernetes.io/minikube-addons=registry ...
	[23:39:32.198–23:39:57.816: with registry done, polling continues for gcp-auth, csi-hostpath-driver, and ingress-nginx; all still Pending]
	I0717 23:39:58.019148 1806717 kapi.go:107] duration metric: took 1m9.591574582s to wait for app.kubernetes.io/name=ingress-nginx ...
	[23:39:58.209–23:40:01.315: polling continues for gcp-auth and csi-hostpath-driver; both still Pending]
	I0717 23:40:01.698838 1806717 kapi.go:107] duration metric: took 1m8.073302097s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 23:40:01.700964 1806717 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-579349 cluster.
	I0717 23:40:01.702657 1806717 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 23:40:01.704647 1806717 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 23:40:01.813136 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:02.313083 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:02.814150 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:03.312140 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:03.813644 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:04.312316 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:04.814758 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:05.313186 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:05.821922 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:06.317661 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:06.812975 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:07.313366 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:07.812507 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:08.313523 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:08.812396 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:09.312279 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:09.812776 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:10.312697 1806717 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 23:40:10.815410 1806717 kapi.go:107] duration metric: took 1m22.020023436s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 23:40:10.817280 1806717 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0717 23:40:10.819084 1806717 addons.go:502] enable addons completed in 1m28.207002696s: enabled=[cloud-spanner default-storageclass storage-provisioner ingress-dns metrics-server inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0717 23:40:10.819138 1806717 start.go:233] waiting for cluster config update ...
	I0717 23:40:10.819161 1806717 start.go:242] writing updated cluster config ...
	I0717 23:40:10.819539 1806717 ssh_runner.go:195] Run: rm -f paused
	I0717 23:40:10.888094 1806717 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 23:40:10.890272 1806717 out.go:177] * Done! kubectl is now configured to use "addons-579349" cluster and "default" namespace by default
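	The gcp-auth notes above name the opt-out mechanism: the webhook skips any pod labeled with the `gcp-auth-skip-secret` key. A minimal pod manifest sketch, assuming the conventional value "true" (the pod name is hypothetical; the image is one already present in this cluster):

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds               # hypothetical name, for illustration only
	    labels:
	      gcp-auth-skip-secret: "true"   # assumed value; the log above only names the label key
	  spec:
	    containers:
	    - name: app
	      image: gcr.io/google-samples/hello-app:1.0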
	
	* 
	* ==> CRI-O <==
	* Jul 17 23:43:31 addons-579349 conmon[4582]: conmon f44f446133cd0620392e <ninfo>: container 4593 exited with status 137
	Jul 17 23:43:31 addons-579349 crio[886]: time="2023-07-17 23:43:31.239087755Z" level=info msg="Stopped container f44f446133cd0620392e8c4eda2dbcdcf0b6beade0ff9ac3f84a490cce5c3bcb: ingress-nginx/ingress-nginx-controller-7799c6795f-svb26/controller" id=5d418603-033b-4eb0-8b4f-a0843e3034ba name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 23:43:31 addons-579349 crio[886]: time="2023-07-17 23:43:31.239839239Z" level=info msg="Stopping pod sandbox: 80cd25d350a777900260ac91ce716a67734cb58a7fcb3eada81f28e4e34b5aec" id=2002268b-a7fe-4071-aa10-1e4102e154dd name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 23:43:31 addons-579349 crio[886]: time="2023-07-17 23:43:31.244214071Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-EBUQMUL5EOZQS5QH - [0:0]\n:KUBE-HP-OSFJX6ABBFPFR7TX - [0:0]\n-X KUBE-HP-EBUQMUL5EOZQS5QH\n-X KUBE-HP-OSFJX6ABBFPFR7TX\nCOMMIT\n"
	Jul 17 23:43:31 addons-579349 crio[886]: time="2023-07-17 23:43:31.246564833Z" level=info msg="Closing host port tcp:80"
	Jul 17 23:43:31 addons-579349 crio[886]: time="2023-07-17 23:43:31.246620931Z" level=info msg="Closing host port tcp:443"
	Jul 17 23:43:31 addons-579349 crio[886]: time="2023-07-17 23:43:31.248376712Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 23:43:31 addons-579349 crio[886]: time="2023-07-17 23:43:31.248405406Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 23:43:31 addons-579349 crio[886]: time="2023-07-17 23:43:31.248576661Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7799c6795f-svb26 Namespace:ingress-nginx ID:80cd25d350a777900260ac91ce716a67734cb58a7fcb3eada81f28e4e34b5aec UID:758abad8-4432-4676-a383-36eeffc634c5 NetNS:/var/run/netns/8554be28-c39f-4731-94a1-8b0427442485 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 23:43:31 addons-579349 crio[886]: time="2023-07-17 23:43:31.248715179Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7799c6795f-svb26 from CNI network \"kindnet\" (type=ptp)"
	Jul 17 23:43:31 addons-579349 crio[886]: time="2023-07-17 23:43:31.279995349Z" level=info msg="Stopped pod sandbox: 80cd25d350a777900260ac91ce716a67734cb58a7fcb3eada81f28e4e34b5aec" id=2002268b-a7fe-4071-aa10-1e4102e154dd name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 23:43:32 addons-579349 crio[886]: time="2023-07-17 23:43:32.193538376Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=7d5f8626-545a-45c4-bb93-13c17034b75a name=/runtime.v1.ImageService/ImageStatus
	Jul 17 23:43:32 addons-579349 crio[886]: time="2023-07-17 23:43:32.193762530Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=7d5f8626-545a-45c4-bb93-13c17034b75a name=/runtime.v1.ImageService/ImageStatus
	Jul 17 23:43:32 addons-579349 crio[886]: time="2023-07-17 23:43:32.194580819Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=8c23ed45-0c99-4032-a84f-033b0beb8fed name=/runtime.v1.ImageService/ImageStatus
	Jul 17 23:43:32 addons-579349 crio[886]: time="2023-07-17 23:43:32.194773859Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8c23ed45-0c99-4032-a84f-033b0beb8fed name=/runtime.v1.ImageService/ImageStatus
	Jul 17 23:43:32 addons-579349 crio[886]: time="2023-07-17 23:43:32.195551533Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-l8bzw/hello-world-app" id=f8035c6c-460c-4c5d-8a8c-b4f8c07698ba name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 23:43:32 addons-579349 crio[886]: time="2023-07-17 23:43:32.195653251Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 23:43:32 addons-579349 crio[886]: time="2023-07-17 23:43:32.252192360Z" level=info msg="Removing container: f44f446133cd0620392e8c4eda2dbcdcf0b6beade0ff9ac3f84a490cce5c3bcb" id=22f863ab-599e-49d0-9378-4d303b21a5a1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 23:43:32 addons-579349 crio[886]: time="2023-07-17 23:43:32.274797983Z" level=info msg="Removed container f44f446133cd0620392e8c4eda2dbcdcf0b6beade0ff9ac3f84a490cce5c3bcb: ingress-nginx/ingress-nginx-controller-7799c6795f-svb26/controller" id=22f863ab-599e-49d0-9378-4d303b21a5a1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 23:43:32 addons-579349 crio[886]: time="2023-07-17 23:43:32.292635302Z" level=info msg="Created container 33f3c4df6627a7237f4bd108e084c6f3e99a20a0b589b86a028892ebf1f6bc61: default/hello-world-app-65bdb79f98-l8bzw/hello-world-app" id=f8035c6c-460c-4c5d-8a8c-b4f8c07698ba name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 23:43:32 addons-579349 crio[886]: time="2023-07-17 23:43:32.293607666Z" level=info msg="Starting container: 33f3c4df6627a7237f4bd108e084c6f3e99a20a0b589b86a028892ebf1f6bc61" id=b618bd3e-2a8c-404e-8ba9-731a749f0cf1 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 23:43:32 addons-579349 crio[886]: time="2023-07-17 23:43:32.307919949Z" level=info msg="Started container" PID=7955 containerID=33f3c4df6627a7237f4bd108e084c6f3e99a20a0b589b86a028892ebf1f6bc61 description=default/hello-world-app-65bdb79f98-l8bzw/hello-world-app id=b618bd3e-2a8c-404e-8ba9-731a749f0cf1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=81ce2ef5da30351690cc8d1983f78ae7e6dc0d9c47c9dcfab32dea7a595021d8
	Jul 17 23:43:32 addons-579349 conmon[7935]: conmon 33f3c4df6627a7237f4b <ninfo>: container 7955 exited with status 1
	Jul 17 23:43:33 addons-579349 crio[886]: time="2023-07-17 23:43:33.256105894Z" level=info msg="Removing container: 219a06fb8fc0fba870e37979814395d529eea0b65f12b907ba5ec69aed433a29" id=b3f30f16-a36d-4b50-ae55-0f4e3f78dee2 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 23:43:33 addons-579349 crio[886]: time="2023-07-17 23:43:33.285827548Z" level=info msg="Removed container 219a06fb8fc0fba870e37979814395d529eea0b65f12b907ba5ec69aed433a29: default/hello-world-app-65bdb79f98-l8bzw/hello-world-app" id=b3f30f16-a36d-4b50-ae55-0f4e3f78dee2 name=/runtime.v1.RuntimeService/RemoveContainer
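	The escaped payload in the "Restoring iptables rules" line above decodes to the short iptables-restore program below (same content, \n expanded): it declares and then deletes the two per-pod hostport chains as the ingress controller's sandbox is torn down, matching the "Closing host port tcp:80/443" lines that follow it.

	  *nat
	  :KUBE-HOSTPORTS - [0:0]
	  :KUBE-HP-EBUQMUL5EOZQS5QH - [0:0]
	  :KUBE-HP-OSFJX6ABBFPFR7TX - [0:0]
	  -X KUBE-HP-EBUQMUL5EOZQS5QH
	  -X KUBE-HP-OSFJX6ABBFPFR7TX
	  COMMIT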
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	33f3c4df6627a       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                             6 seconds ago       Exited              hello-world-app           2                   81ce2ef5da303       hello-world-app-65bdb79f98-l8bzw
	c57f2d8f458bb       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                              2 minutes ago       Running             nginx                     0                   f033def3f2b1f       nginx
	42faee2e67320       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        3 minutes ago       Running             headlamp                  0                   1660d1a68ded3       headlamp-66f6498c69-b6mq4
	dd545929ea759       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago       Running             gcp-auth                  0                   e225cc20081fe       gcp-auth-58478865f7-nsx4p
	e459ce80d0f64       8f2588812ab2947d53d2f99b11142e2be088330ec67837bb82801c0d3501af78                                                             4 minutes ago       Exited              patch                     1                   b37bf3da94a9f       ingress-nginx-admission-patch-hvvmj
	1915f575d4ae2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   4 minutes ago       Exited              create                    0                   05679c311d7c6       ingress-nginx-admission-create-gtsp9
	e51f0f0de0e91       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   5481cd266f6b1       storage-provisioner
	abe4f1be284c3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago       Running             coredns                   0                   bb41943695f03       coredns-5d78c9869d-xr68f
	0103f0350eada       fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a                                                             4 minutes ago       Running             kube-proxy                0                   0c42ae78ff596       kube-proxy-t2m8c
	9dd12fd2733eb       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                                             4 minutes ago       Running             kindnet-cni               0                   ca11ca0a4179b       kindnet-nlpqh
	89622deb3a699       ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8                                                             5 minutes ago       Running             kube-controller-manager   0                   8e9c9bda0aa7e       kube-controller-manager-addons-579349
	50ef0ce6170b2       bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540                                                             5 minutes ago       Running             kube-scheduler            0                   aa56ff86b5aa0       kube-scheduler-addons-579349
	e7446bf2b25f0       39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473                                                             5 minutes ago       Running             kube-apiserver            0                   150e4a2f316cb       kube-apiserver-addons-579349
	b1cf69682990b       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                                             5 minutes ago       Running             etcd                      0                   67fb0f9c41980       etcd-addons-579349
	
	* 
	* ==> coredns [abe4f1be284c30ba2d0410a75b94eb4928a7d0221be8707cfbe16833e0f12d8e] <==
	* [INFO] 10.244.0.16:53101 - 37630 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065361s
	[INFO] 10.244.0.16:53101 - 61839 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047975s
	[INFO] 10.244.0.16:53101 - 63403 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001045208s
	[INFO] 10.244.0.16:33262 - 40332 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002424164s
	[INFO] 10.244.0.16:33262 - 42251 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000115207s
	[INFO] 10.244.0.16:53101 - 25083 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000972839s
	[INFO] 10.244.0.16:53101 - 52463 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071974s
	[INFO] 10.244.0.16:51827 - 43940 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000115059s
	[INFO] 10.244.0.16:36659 - 28982 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00014925s
	[INFO] 10.244.0.16:51827 - 12914 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000083421s
	[INFO] 10.244.0.16:51827 - 9414 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068479s
	[INFO] 10.244.0.16:51827 - 54555 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055384s
	[INFO] 10.244.0.16:51827 - 48373 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054891s
	[INFO] 10.244.0.16:51827 - 20088 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037973s
	[INFO] 10.244.0.16:36659 - 32782 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.002078633s
	[INFO] 10.244.0.16:51827 - 40768 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001243056s
	[INFO] 10.244.0.16:36659 - 23508 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093439s
	[INFO] 10.244.0.16:36659 - 57416 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000091043s
	[INFO] 10.244.0.16:36659 - 20156 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000222062s
	[INFO] 10.244.0.16:51827 - 16243 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001130434s
	[INFO] 10.244.0.16:51827 - 12133 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000104491s
	[INFO] 10.244.0.16:36659 - 64997 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000087006s
	[INFO] 10.244.0.16:36659 - 14421 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001075772s
	[INFO] 10.244.0.16:36659 - 245 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001041007s
	[INFO] 10.244.0.16:36659 - 49698 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073336s
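	The alternating NXDOMAIN/NOERROR entries above are standard resolv.conf search-path expansion: `hello-world-app.default.svc.cluster.local` has fewer dots than the ndots threshold, so the client first appends each search domain (here `ingress-nginx.svc.cluster.local`, `svc.cluster.local`, `cluster.local`, and `us-east-2.compute.internal`, all visible in the query suffixes), collecting NXDOMAINs, before the unmodified name resolves with NOERROR. A sketch of a pod resolv.conf that would produce this sequence; the nameserver address and ndots value are assumptions (typical kubelet defaults), not captured in this report:

	  search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	  nameserver 10.96.0.10   # assumed cluster DNS service IP; not shown in these logs
	  options ndots:5         # assumed kubelet default; short names trigger search expansion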
	
	* 
	* ==> describe nodes <==
	* Name:               addons-579349
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-579349
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=addons-579349
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T23_38_30_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-579349
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 23:38:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-579349
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 23:43:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:43:35 +0000   Mon, 17 Jul 2023 23:38:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:43:35 +0000   Mon, 17 Jul 2023 23:38:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:43:35 +0000   Mon, 17 Jul 2023 23:38:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:43:35 +0000   Mon, 17 Jul 2023 23:39:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-579349
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	System Info:
	  Machine ID:                 998b7be9f4174b9480bfb2f86943e49a
	  System UUID:                f6829f87-c8e8-43fc-8c6f-00431785e173
	  Boot ID:                    233fb95c-536d-4fc4-882b-c04fac35e1a2
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-l8bzw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  gcp-auth                    gcp-auth-58478865f7-nsx4p                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  headlamp                    headlamp-66f6498c69-b6mq4                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 coredns-5d78c9869d-xr68f                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m56s
	  kube-system                 etcd-addons-579349                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m9s
	  kube-system                 kindnet-nlpqh                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m56s
	  kube-system                 kube-apiserver-addons-579349             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-addons-579349    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-t2m8c                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-addons-579349             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node addons-579349 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node addons-579349 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x8 over 5m18s)  kubelet          Node addons-579349 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m9s                   kubelet          Node addons-579349 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m9s                   kubelet          Node addons-579349 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m9s                   kubelet          Node addons-579349 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m56s                  node-controller  Node addons-579349 event: Registered Node addons-579349 in Controller
	  Normal  NodeReady                4m23s                  kubelet          Node addons-579349 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001147] FS-Cache: O-key=[8] 'dc643b0000000000'
	[  +0.000737] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=000000000a000c51
	[  +0.001149] FS-Cache: N-key=[8] 'dc643b0000000000'
	[  +0.003019] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001021] FS-Cache: O-cookie d=00000000620abd40{9p.inode} n=00000000d141867d
	[  +0.001211] FS-Cache: O-key=[8] 'dc643b0000000000'
	[  +0.000721] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001091] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=000000000e847315
	[  +0.001150] FS-Cache: N-key=[8] 'dc643b0000000000'
	[  +2.830172] FS-Cache: Duplicate cookie detected
	[  +0.000675] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.000934] FS-Cache: O-cookie d=00000000620abd40{9p.inode} n=000000009d9318af
	[  +0.001153] FS-Cache: O-key=[8] 'db643b0000000000'
	[  +0.000692] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000933] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=000000000a000c51
	[  +0.001023] FS-Cache: N-key=[8] 'db643b0000000000'
	[  +0.302203] FS-Cache: Duplicate cookie detected
	[  +0.000726] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001000] FS-Cache: O-cookie d=00000000620abd40{9p.inode} n=000000004d26fdd8
	[  +0.001191] FS-Cache: O-key=[8] 'e1643b0000000000'
	[  +0.000723] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000983] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=00000000fa93f3dd
	[  +0.001132] FS-Cache: N-key=[8] 'e1643b0000000000'
	
	* 
	* ==> etcd [b1cf69682990b3e788a81d05bf319da7f1d400bc6c2f627104965c8b4c6c49f3] <==
	* {"level":"info","ts":"2023-07-17T23:38:22.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-17T23:38:22.450Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-579349 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T23:38:22.450Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T23:38:22.450Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T23:38:22.452Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T23:38:22.452Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-07-17T23:38:22.452Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:38:22.458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T23:38:22.486Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T23:38:22.499Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:38:22.499Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:38:22.499Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:38:43.546Z","caller":"traceutil/trace.go:171","msg":"trace[833175041] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"133.07138ms","start":"2023-07-17T23:38:43.413Z","end":"2023-07-17T23:38:43.546Z","steps":["trace[833175041] 'process raft request'  (duration: 125.026816ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:38:45.413Z","caller":"traceutil/trace.go:171","msg":"trace[949486818] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"115.478383ms","start":"2023-07-17T23:38:45.298Z","end":"2023-07-17T23:38:45.413Z","steps":["trace[949486818] 'process raft request'  (duration: 16.119888ms)","trace[949486818] 'compare'  (duration: 99.252232ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T23:38:45.414Z","caller":"traceutil/trace.go:171","msg":"trace[8697758] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"110.455122ms","start":"2023-07-17T23:38:45.304Z","end":"2023-07-17T23:38:45.414Z","steps":["trace[8697758] 'process raft request'  (duration: 110.388292ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:38:45.452Z","caller":"traceutil/trace.go:171","msg":"trace[363796618] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"138.232748ms","start":"2023-07-17T23:38:45.314Z","end":"2023-07-17T23:38:45.452Z","steps":["trace[363796618] 'process raft request'  (duration: 127.178737ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:38:45.963Z","caller":"traceutil/trace.go:171","msg":"trace[2066010517] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"100.247848ms","start":"2023-07-17T23:38:45.863Z","end":"2023-07-17T23:38:45.963Z","steps":["trace[2066010517] 'process raft request'  (duration: 43.140884ms)","trace[2066010517] 'compare'  (duration: 56.512861ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T23:38:45.981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.388108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T23:38:45.981Z","caller":"traceutil/trace.go:171","msg":"trace[1436943916] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:423; }","duration":"117.480545ms","start":"2023-07-17T23:38:45.863Z","end":"2023-07-17T23:38:45.981Z","steps":["trace[1436943916] 'agreement among raft nodes before linearized reading'  (duration: 117.357978ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:38:45.981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.662943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T23:38:45.981Z","caller":"traceutil/trace.go:171","msg":"trace[1318351719] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:423; }","duration":"117.706529ms","start":"2023-07-17T23:38:45.863Z","end":"2023-07-17T23:38:45.981Z","steps":["trace[1318351719] 'agreement among raft nodes before linearized reading'  (duration: 117.64838ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:38:45.981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.006669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
	{"level":"info","ts":"2023-07-17T23:38:45.981Z","caller":"traceutil/trace.go:171","msg":"trace[235085866] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:423; }","duration":"118.036921ms","start":"2023-07-17T23:38:45.863Z","end":"2023-07-17T23:38:45.981Z","steps":["trace[235085866] 'agreement among raft nodes before linearized reading'  (duration: 117.945665ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:38:46.364Z","caller":"traceutil/trace.go:171","msg":"trace[116867748] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"120.693945ms","start":"2023-07-17T23:38:46.243Z","end":"2023-07-17T23:38:46.364Z","steps":["trace[116867748] 'process raft request'  (duration: 120.148958ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:38:46.408Z","caller":"traceutil/trace.go:171","msg":"trace[505957043] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"164.815833ms","start":"2023-07-17T23:38:46.243Z","end":"2023-07-17T23:38:46.408Z","steps":["trace[505957043] 'process raft request'  (duration: 120.272378ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [dd545929ea759f0a05fe941b5d94e7968e22c2dc2d0910b86b8917973b31173a] <==
	* 2023/07/17 23:40:01 GCP Auth Webhook started!
	2023/07/17 23:40:18 Ready to marshal response ...
	2023/07/17 23:40:18 Ready to write response ...
	2023/07/17 23:40:18 Ready to marshal response ...
	2023/07/17 23:40:18 Ready to write response ...
	2023/07/17 23:40:18 Ready to marshal response ...
	2023/07/17 23:40:18 Ready to write response ...
	2023/07/17 23:40:21 Ready to marshal response ...
	2023/07/17 23:40:21 Ready to write response ...
	2023/07/17 23:40:47 Ready to marshal response ...
	2023/07/17 23:40:47 Ready to write response ...
	2023/07/17 23:40:47 Ready to marshal response ...
	2023/07/17 23:40:47 Ready to write response ...
	2023/07/17 23:41:07 Ready to marshal response ...
	2023/07/17 23:41:07 Ready to write response ...
	2023/07/17 23:43:12 Ready to marshal response ...
	2023/07/17 23:43:12 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:43:38 up  8:26,  0 users,  load average: 0.40, 1.60, 2.19
	Linux addons-579349 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [9dd12fd2733eb1fa3c452576e22f38f39f145fc13f41fb522d0b4de04200a152] <==
	* I0717 23:41:34.903349       1 main.go:227] handling current node
	I0717 23:41:44.907949       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:41:44.907978       1 main.go:227] handling current node
	I0717 23:41:54.920533       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:41:54.920564       1 main.go:227] handling current node
	I0717 23:42:04.930269       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:42:04.930298       1 main.go:227] handling current node
	I0717 23:42:14.936401       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:42:14.936435       1 main.go:227] handling current node
	I0717 23:42:24.942643       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:42:24.942673       1 main.go:227] handling current node
	I0717 23:42:34.946747       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:42:34.946776       1 main.go:227] handling current node
	I0717 23:42:44.958331       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:42:44.958359       1 main.go:227] handling current node
	I0717 23:42:54.968553       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:42:54.968580       1 main.go:227] handling current node
	I0717 23:43:04.973224       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:43:04.973548       1 main.go:227] handling current node
	I0717 23:43:14.977477       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:43:14.977511       1 main.go:227] handling current node
	I0717 23:43:24.989241       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:43:24.989274       1 main.go:227] handling current node
	I0717 23:43:34.994865       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:43:34.994894       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [e7446bf2b25f0865382ececa806c40ec220a2286c3c586f26581c0d00ca637b2] <==
	* I0717 23:41:24.728310       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 23:41:24.738930       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 23:41:24.739031       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 23:41:24.766682       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 23:41:24.766747       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 23:41:24.792546       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 23:41:24.792706       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 23:41:24.822754       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 23:41:24.822894       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 23:41:25.728224       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 23:41:25.823813       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 23:41:25.858176       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0717 23:41:27.023070       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0717 23:41:27.023101       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:41:27.023143       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:41:27.023152       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:41:27.035390       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0717 23:42:27.023337       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0717 23:42:27.023383       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:42:27.023424       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:42:27.023433       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:43:13.111314       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.101.35.100]
	E0717 23:43:29.263205       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400e0cd230), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400b7eccd0), ResponseWriter:(*httpsnoop.rw)(0x400b7eccd0), Flusher:(*httpsnoop.rw)(0x400b7eccd0), CloseNotifier:(*httpsnoop.rw)(0x400b7eccd0), Pusher:(*httpsnoop.rw)(0x400b7eccd0)}}, encoder:(*versioning.codec)(0x400f9faf00), memAllocator:(*runtime.Allocator)(0x40033545e8)})
	
	* 
	* ==> kube-controller-manager [89622deb3a699dd1ebf0e246c0bb379a52337d780cfed007cb290880612f8e84] <==
	* E0717 23:41:57.729054       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 23:41:58.776249       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 23:41:58.776285       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 23:42:00.764189       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 23:42:00.764308       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 23:42:25.811413       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 23:42:25.811461       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 23:42:38.555146       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 23:42:38.555180       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 23:42:47.278881       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 23:42:47.278917       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 23:42:52.625332       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 23:42:52.625365       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 23:43:12.851426       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0717 23:43:12.886040       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-l8bzw"
	W0717 23:43:13.506808       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 23:43:13.506843       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 23:43:17.174351       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 23:43:17.174386       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 23:43:23.712917       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 23:43:23.712949       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 23:43:30.004416       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0717 23:43:30.035384       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0717 23:43:36.314125       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 23:43:36.314158       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [0103f0350eada67f3eb6d2d8581fd4d028c6e5fd6f519d1c39d1490d4b23cc44] <==
	* I0717 23:38:47.686356       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0717 23:38:47.703329       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0717 23:38:47.703385       1 server_others.go:554] "Using iptables proxy"
	I0717 23:38:47.852008       1 server_others.go:192] "Using iptables Proxier"
	I0717 23:38:47.852117       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 23:38:47.852157       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 23:38:47.852197       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 23:38:47.852301       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 23:38:47.852865       1 server.go:658] "Version info" version="v1.27.3"
	I0717 23:38:47.853111       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 23:38:47.866219       1 config.go:188] "Starting service config controller"
	I0717 23:38:47.866365       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 23:38:47.866618       1 config.go:97] "Starting endpoint slice config controller"
	I0717 23:38:47.866675       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 23:38:47.867538       1 config.go:315] "Starting node config controller"
	I0717 23:38:47.867620       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 23:38:47.966879       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 23:38:47.967041       1 shared_informer.go:318] Caches are synced for service config
	I0717 23:38:47.992593       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [50ef0ce6170b27b8a3293fddee93a6400eecb927e31aa946d061c572dbf7743b] <==
	* W0717 23:38:26.815395       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 23:38:26.815421       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 23:38:26.815495       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 23:38:26.815513       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 23:38:26.815563       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 23:38:26.815581       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 23:38:26.815637       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 23:38:26.815652       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 23:38:26.815702       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 23:38:26.815717       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 23:38:26.815756       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 23:38:26.815781       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 23:38:26.815863       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 23:38:26.815878       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 23:38:26.815946       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 23:38:26.815961       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 23:38:26.815997       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 23:38:26.816020       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 23:38:26.816126       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 23:38:26.816144       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 23:38:26.816191       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 23:38:26.816206       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 23:38:26.816268       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 23:38:26.816283       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0717 23:38:27.705063       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
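The reflector warnings above are startup ordering, not a persistent RBAC failure: kube-scheduler begins listing resources before its role bindings are visible to the API server, and the warnings stop once caches sync at 23:38:27. One way to confirm the grant after startup is a SubjectAccessReview; the Go sketch below is illustrative only (it assumes client-go and a kubeconfig at the default path, and is not part of the test suite).

    // rbacprobe.go: ask the API server whether system:kube-scheduler may list
    // statefulsets in the apps group, the exact permission the warning names.
    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        sar := &authv1.SubjectAccessReview{Spec: authv1.SubjectAccessReviewSpec{
            User: "system:kube-scheduler",
            ResourceAttributes: &authv1.ResourceAttributes{
                Verb: "list", Group: "apps", Resource: "statefulsets",
            },
        }}
        res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
            context.Background(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        // Expect allowed=true once the bootstrap RBAC has settled.
        fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
    }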
	
	* 
	* ==> kubelet <==
	* Jul 17 23:43:29 addons-579349 kubelet[1357]: E0717 23:43:29.421167    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3b3bd0b9f88a48beeb4c7ff7ba3814b099cca2aaff5cba29712f75f95ea9e436/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3b3bd0b9f88a48beeb4c7ff7ba3814b099cca2aaff5cba29712f75f95ea9e436/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 23:43:29 addons-579349 kubelet[1357]: E0717 23:43:29.421426    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3544bcac389e84635d9abb658d62e9fb871fa1738fc8b56a3853f9f4b566fe68/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3544bcac389e84635d9abb658d62e9fb871fa1738fc8b56a3853f9f4b566fe68/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 23:43:29 addons-579349 kubelet[1357]: E0717 23:43:29.424037    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6bef0de1cd08888c0da4572a3d6ff6cec9d62047ecc8df124e883e7c8b2f0d1f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6bef0de1cd08888c0da4572a3d6ff6cec9d62047ecc8df124e883e7c8b2f0d1f/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 23:43:29 addons-579349 kubelet[1357]: E0717 23:43:29.449101    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e3c875a115d03e386d5aff4f8bc5c112783aabf4926d32b812f4f19a9559ca0d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e3c875a115d03e386d5aff4f8bc5c112783aabf4926d32b812f4f19a9559ca0d/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 23:43:29 addons-579349 kubelet[1357]: E0717 23:43:29.452231    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ed011d1f86583b6a59f84644591270221f80f2282df4ac14f42528f99928ec9d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ed011d1f86583b6a59f84644591270221f80f2282df4ac14f42528f99928ec9d/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 23:43:29 addons-579349 kubelet[1357]: E0717 23:43:29.471896    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3b3bd0b9f88a48beeb4c7ff7ba3814b099cca2aaff5cba29712f75f95ea9e436/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3b3bd0b9f88a48beeb4c7ff7ba3814b099cca2aaff5cba29712f75f95ea9e436/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 23:43:29 addons-579349 kubelet[1357]: E0717 23:43:29.473231    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4e91caf000ef3026e1ad7e3329b40d28c8e855db1ef8235921b8455a24fe4834/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4e91caf000ef3026e1ad7e3329b40d28c8e855db1ef8235921b8455a24fe4834/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 23:43:29 addons-579349 kubelet[1357]: W0717 23:43:29.932698    1357 container.go:586] Failed to update stats for container "/docker/1c83b133650d45aacfab3b0c93fcc908fab4cf98a6b857d210de53fd6f61143e/crio-4bbe2e2aba9cebcb04b02041170f7e47860d7d1b9df5157ca1e4c54798eb4fa1": unable to determine device info for dir: /var/lib/containers/storage/overlay/eca3ed9a3bd991e7f998116c428900677ffd1601c26233117e7abec5b1b34a21/diff: stat failed on /var/lib/containers/storage/overlay/eca3ed9a3bd991e7f998116c428900677ffd1601c26233117e7abec5b1b34a21/diff with error: no such file or directory, continuing to push stats
	Jul 17 23:43:30 addons-579349 kubelet[1357]: E0717 23:43:30.058376    1357 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-svb26.1772cc45b97e2e6b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-svb26", UID:"758abad8-4432-4676-a383-36eeffc634c5", APIVersion:"v1", ResourceVersion:"760", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-579349"}, FirstTimestamp:time.Date(2023, time.July, 17, 23, 43, 30, 54221419, time.Local), LastTimestamp:time.Date(2023, time.July, 17, 23, 43, 30, 54221419, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-svb26.1772cc45b97e2e6b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 23:43:30 addons-579349 kubelet[1357]: W0717 23:43:30.198022    1357 container.go:586] Failed to update stats for container "/crio-4bbe2e2aba9cebcb04b02041170f7e47860d7d1b9df5157ca1e4c54798eb4fa1": unable to determine device info for dir: /var/lib/containers/storage/overlay/eca3ed9a3bd991e7f998116c428900677ffd1601c26233117e7abec5b1b34a21/diff: stat failed on /var/lib/containers/storage/overlay/eca3ed9a3bd991e7f998116c428900677ffd1601c26233117e7abec5b1b34a21/diff with error: no such file or directory, continuing to push stats
	Jul 17 23:43:31 addons-579349 kubelet[1357]: I0717 23:43:31.194213    1357 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=a2eae6d3-e243-46d2-860f-a1a75ae9bce2 path="/var/lib/kubelet/pods/a2eae6d3-e243-46d2-860f-a1a75ae9bce2/volumes"
	Jul 17 23:43:31 addons-579349 kubelet[1357]: I0717 23:43:31.195232    1357 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=c1500102-8d64-4ca7-b348-9dea02fb4cc6 path="/var/lib/kubelet/pods/c1500102-8d64-4ca7-b348-9dea02fb4cc6/volumes"
	Jul 17 23:43:31 addons-579349 kubelet[1357]: I0717 23:43:31.195657    1357 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ca6f164c-d60f-45f5-aabe-1e3a40cbcae6 path="/var/lib/kubelet/pods/ca6f164c-d60f-45f5-aabe-1e3a40cbcae6/volumes"
	Jul 17 23:43:31 addons-579349 kubelet[1357]: I0717 23:43:31.389796    1357 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/758abad8-4432-4676-a383-36eeffc634c5-webhook-cert\") pod \"758abad8-4432-4676-a383-36eeffc634c5\" (UID: \"758abad8-4432-4676-a383-36eeffc634c5\") "
	Jul 17 23:43:31 addons-579349 kubelet[1357]: I0717 23:43:31.389867    1357 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sld76\" (UniqueName: \"kubernetes.io/projected/758abad8-4432-4676-a383-36eeffc634c5-kube-api-access-sld76\") pod \"758abad8-4432-4676-a383-36eeffc634c5\" (UID: \"758abad8-4432-4676-a383-36eeffc634c5\") "
	Jul 17 23:43:31 addons-579349 kubelet[1357]: I0717 23:43:31.393113    1357 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758abad8-4432-4676-a383-36eeffc634c5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "758abad8-4432-4676-a383-36eeffc634c5" (UID: "758abad8-4432-4676-a383-36eeffc634c5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 23:43:31 addons-579349 kubelet[1357]: I0717 23:43:31.394971    1357 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/758abad8-4432-4676-a383-36eeffc634c5-kube-api-access-sld76" (OuterVolumeSpecName: "kube-api-access-sld76") pod "758abad8-4432-4676-a383-36eeffc634c5" (UID: "758abad8-4432-4676-a383-36eeffc634c5"). InnerVolumeSpecName "kube-api-access-sld76". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 23:43:31 addons-579349 kubelet[1357]: I0717 23:43:31.490270    1357 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sld76\" (UniqueName: \"kubernetes.io/projected/758abad8-4432-4676-a383-36eeffc634c5-kube-api-access-sld76\") on node \"addons-579349\" DevicePath \"\""
	Jul 17 23:43:31 addons-579349 kubelet[1357]: I0717 23:43:31.490319    1357 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/758abad8-4432-4676-a383-36eeffc634c5-webhook-cert\") on node \"addons-579349\" DevicePath \"\""
	Jul 17 23:43:32 addons-579349 kubelet[1357]: I0717 23:43:32.192956    1357 scope.go:115] "RemoveContainer" containerID="219a06fb8fc0fba870e37979814395d529eea0b65f12b907ba5ec69aed433a29"
	Jul 17 23:43:32 addons-579349 kubelet[1357]: I0717 23:43:32.250144    1357 scope.go:115] "RemoveContainer" containerID="f44f446133cd0620392e8c4eda2dbcdcf0b6beade0ff9ac3f84a490cce5c3bcb"
	Jul 17 23:43:33 addons-579349 kubelet[1357]: I0717 23:43:33.193785    1357 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=758abad8-4432-4676-a383-36eeffc634c5 path="/var/lib/kubelet/pods/758abad8-4432-4676-a383-36eeffc634c5/volumes"
	Jul 17 23:43:33 addons-579349 kubelet[1357]: I0717 23:43:33.254165    1357 scope.go:115] "RemoveContainer" containerID="219a06fb8fc0fba870e37979814395d529eea0b65f12b907ba5ec69aed433a29"
	Jul 17 23:43:33 addons-579349 kubelet[1357]: I0717 23:43:33.254444    1357 scope.go:115] "RemoveContainer" containerID="33f3c4df6627a7237f4bd108e084c6f3e99a20a0b589b86a028892ebf1f6bc61"
	Jul 17 23:43:33 addons-579349 kubelet[1357]: E0717 23:43:33.254705    1357 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-l8bzw_default(a31a9215-b2db-4e28-939c-601081452d51)\"" pod="default/hello-world-app-65bdb79f98-l8bzw" podUID=a31a9215-b2db-4e28-939c-601081452d51
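Two distinct problems are interleaved above. The fsHandler/stats errors are a benign race: the stats collector stats an overlay diff directory after crio has already deleted the container, so the path is gone. The pod_workers error is the real symptom for this test: hello-world-app is in CrashLoopBackOff. A minimal Go sketch of how such a stats race is usually tolerated (the overlay path is copied from the log and is not expected to exist on the machine running this):

    // inodeusage.go: count entries under an overlay diff dir, treating a
    // vanished root as zero usage rather than an error, mirroring how a stats
    // collector can tolerate containers torn down mid-collection.
    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "path/filepath"
    )

    func inodeUsage(root string) (int64, error) {
        var n int64
        err := filepath.WalkDir(root, func(_ string, _ fs.DirEntry, err error) error {
            if err != nil {
                return err
            }
            n++
            return nil
        })
        if errors.Is(err, fs.ErrNotExist) {
            // The container was deleted between listing and walking.
            return 0, nil
        }
        return n, err
    }

    func main() {
        fmt.Println(inodeUsage("/var/lib/containers/storage/overlay/3b3bd0b9f88a48beeb4c7ff7ba3814b099cca2aaff5cba29712f75f95ea9e436/diff"))
    }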
	
	* 
	* ==> storage-provisioner [e51f0f0de0e9181e3a68af94b8e18b462453eb3d6fb187ebf9b0f7d343f037ce] <==
	* I0717 23:39:16.074738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 23:39:16.089903       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 23:39:16.090098       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 23:39:16.099979       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 23:39:16.100234       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-579349_7b126515-b119-4a36-a4be-3837db1b2619!
	I0717 23:39:16.101198       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7913e546-de37-42dd-a2dc-68981a262d53", APIVersion:"v1", ResourceVersion:"830", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-579349_7b126515-b119-4a36-a4be-3837db1b2619 became leader
	I0717 23:39:16.201200       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-579349_7b126515-b119-4a36-a4be-3837db1b2619!
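The provisioner only starts its controller after winning leader election on kube-system/k8s.io-minikube-hostpath, acquired above in about 10ms. A minimal client-go sketch of the same pattern follows; note it uses a Lease lock for brevity, whereas the event above shows the provisioner locking an Endpoints object, and the identity string is a placeholder.

    // leaderelect.go: acquire the lease named in the log, then start work.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "sketch-identity"},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                // Mirrors "successfully acquired lease ... Starting provisioner controller".
                OnStartedLeading: func(ctx context.Context) { fmt.Println("became leader") },
                OnStoppedLeading: func() { fmt.Println("lost lease") },
            },
        })
    }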
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-579349 -n addons-579349
helpers_test.go:261: (dbg) Run:  kubectl --context addons-579349 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (173.66s)

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-linux-arm64 license: exit status 40 (274.110785ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.27s)
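The INET_LICENSES failure is an upstream availability problem rather than a regression in the binary: the license bundle URL returned HTTP 404, and minikube refuses anything but 200 before writing the file. The failing check reduces to a status-code guard; a minimal Go sketch of that pattern (the URL is a placeholder, not the real licenses endpoint):

    // fetchcheck.go: reject any non-200 response before saving a download.
    package main

    import (
        "fmt"
        "net/http"
    )

    func download(url string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            // This branch yields the message seen above when upstream 404s.
            return fmt.Errorf("download request did not return a 200, received: %d", resp.StatusCode)
        }
        // ... copy resp.Body to disk here ...
        return nil
    }

    func main() {
        fmt.Println(download("https://example.com/licenses.zip"))
    }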

TestIngressAddonLegacy/serial/ValidateIngressAddons (184.58s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-856061 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-856061 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.773076757s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-856061 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-856061 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ae73a66f-dd46-43bf-8df3-8a845e3122f8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ae73a66f-dd46-43bf-8df3-8a845e3122f8] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.006571627s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-856061 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0717 23:52:58.054479 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:52:58.059903 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:52:58.070152 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:52:58.090465 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:52:58.130719 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:52:58.210997 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:52:58.371422 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:52:58.691964 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:52:59.332847 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:53:00.613388 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:53:03.173564 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:53:08.293816 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:53:18.534155 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-856061 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.136428422s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
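As in the TestAddons run, the curl executes inside the node over ssh, and exit status 28 is curl's "operation timed out": the ingress controller never answered on 127.0.0.1:80, as opposed to answering with an error page. For reference, the probe translates to Go roughly as below (an illustrative sketch; setting req.Host, not a header, is what controls the Host: line Go sends):

    // hostprobe.go: GET the node loopback while presenting the Ingress host.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
        if err != nil {
            panic(err)
        }
        req.Host = "nginx.example.com" // equivalent of curl -H 'Host: nginx.example.com'
        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Do(req)
        if err != nil {
            fmt.Println("request failed:", err) // the test saw a timeout here
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }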
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-856061 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-856061 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.021365923s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
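nslookup with an explicit server queries the ingress-dns addon directly on the node IP, bypassing /etc/resolv.conf, and here nothing answered on 192.168.49.2:53 at all (a timeout, not NXDOMAIN). The same probe in Go, for reference (an illustrative sketch; the address comes from the log):

    // dnsprobe.go: resolve via a specific DNS server instead of the default.
    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                // Ignore the default resolver address and dial the node IP.
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "192.168.49.2:53")
            },
        }
        addrs, err := r.LookupHost(context.Background(), "hello-john.test")
        fmt.Println(addrs, err)
    }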
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-856061 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-856061 addons disable ingress-dns --alsologtostderr -v=1: (1.562036846s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-856061 addons disable ingress --alsologtostderr -v=1
E0717 23:53:39.014349 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-856061 addons disable ingress --alsologtostderr -v=1: (7.578399349s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-856061
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-856061:

-- stdout --
	[
	    {
	        "Id": "2687b50b0b0adb9b1bab3a889d090156fca91902424032897be20ffd6da270e7",
	        "Created": "2023-07-17T23:49:22.232787493Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1834075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T23:49:22.54462969Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/2687b50b0b0adb9b1bab3a889d090156fca91902424032897be20ffd6da270e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2687b50b0b0adb9b1bab3a889d090156fca91902424032897be20ffd6da270e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/2687b50b0b0adb9b1bab3a889d090156fca91902424032897be20ffd6da270e7/hosts",
	        "LogPath": "/var/lib/docker/containers/2687b50b0b0adb9b1bab3a889d090156fca91902424032897be20ffd6da270e7/2687b50b0b0adb9b1bab3a889d090156fca91902424032897be20ffd6da270e7-json.log",
	        "Name": "/ingress-addon-legacy-856061",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-856061:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-856061",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9e41e33a7a44e80dc19826a5454029bf661b6d4c31142e1a4efcd17d18de9e0c-init/diff:/var/lib/docker/overlay2/fb8637673150b5a3287a0dca2348bba5adfe3231dd83829c5a54b472b17aad64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e41e33a7a44e80dc19826a5454029bf661b6d4c31142e1a4efcd17d18de9e0c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e41e33a7a44e80dc19826a5454029bf661b6d4c31142e1a4efcd17d18de9e0c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e41e33a7a44e80dc19826a5454029bf661b6d4c31142e1a4efcd17d18de9e0c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-856061",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-856061/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-856061",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-856061",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-856061",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9c8033f2474befebd73ed4fe2c013e94d31ae2d48dffdfe7e1fd3a9948dcdca0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34678"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34677"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34674"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34676"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34675"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9c8033f2474b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-856061": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2687b50b0b0a",
	                        "ingress-addon-legacy-856061"
	                    ],
	                    "NetworkID": "612e55538e7344be5e70ebe7472ecac5bc7e1af912d57c439e0434ff419a2bc1",
	                    "EndpointID": "0e1517223651028074df029d0b65b11db2a5f503585f6f4dbeb1f59d53c125ba",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
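The fields the post-mortem keys off in that dump: State.Status is "running", the node sits at 192.168.49.2 on the per-profile network, and each container port is published on an ephemeral 127.0.0.1 host port (22/tcp on 34678, 8443/tcp on 34675). Reading the same fields programmatically with the Docker Go SDK would look roughly like this (assumes github.com/docker/docker/client; the container name resolves like an ID):

    // inspect.go: fetch container state and published ports via the Docker API.
    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }
        info, err := cli.ContainerInspect(context.Background(), "ingress-addon-legacy-856061")
        if err != nil {
            panic(err)
        }
        fmt.Println("status:", info.State.Status)
        for port, bindings := range info.NetworkSettings.Ports {
            for _, b := range bindings {
                fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
            }
        }
    }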
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-856061 -n ingress-addon-legacy-856061
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-856061 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-856061 logs -n 25: (1.390099634s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-926032                                                   | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1977746520/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-926032                                                   | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1977746520/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-926032 ssh findmnt                                          | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-926032 ssh findmnt                                          | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-926032 ssh findmnt                                          | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-926032 ssh findmnt                                          | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-926032                                                   | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-926032                                                      | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-926032                                                      | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-926032                                                      | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-926032                                                      | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-926032                                                      | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-926032 ssh pgrep                                            | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-926032 image build -t                                       | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	|                | localhost/my-image:functional-926032                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-926032 image ls                                             | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	| image          | functional-926032                                                      | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-926032                                                      | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:48 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| delete         | -p functional-926032                                                   | functional-926032           | jenkins | v1.31.0 | 17 Jul 23 23:48 UTC | 17 Jul 23 23:49 UTC |
	| start          | -p ingress-addon-legacy-856061                                         | ingress-addon-legacy-856061 | jenkins | v1.31.0 | 17 Jul 23 23:49 UTC | 17 Jul 23 23:50 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-856061                                            | ingress-addon-legacy-856061 | jenkins | v1.31.0 | 17 Jul 23 23:50 UTC | 17 Jul 23 23:50 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-856061                                            | ingress-addon-legacy-856061 | jenkins | v1.31.0 | 17 Jul 23 23:50 UTC | 17 Jul 23 23:50 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-856061                                            | ingress-addon-legacy-856061 | jenkins | v1.31.0 | 17 Jul 23 23:51 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-856061 ip                                         | ingress-addon-legacy-856061 | jenkins | v1.31.0 | 17 Jul 23 23:53 UTC | 17 Jul 23 23:53 UTC |
	| addons         | ingress-addon-legacy-856061                                            | ingress-addon-legacy-856061 | jenkins | v1.31.0 | 17 Jul 23 23:53 UTC | 17 Jul 23 23:53 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-856061                                            | ingress-addon-legacy-856061 | jenkins | v1.31.0 | 17 Jul 23 23:53 UTC | 17 Jul 23 23:53 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 23:49:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 23:49:01.113154 1833619 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:49:01.113364 1833619 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:49:01.113377 1833619 out.go:309] Setting ErrFile to fd 2...
	I0717 23:49:01.113384 1833619 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:49:01.113685 1833619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	I0717 23:49:01.114222 1833619 out.go:303] Setting JSON to false
	I0717 23:49:01.115318 1833619 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30685,"bootTime":1689607056,"procs":268,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 23:49:01.115392 1833619 start.go:138] virtualization:  
	I0717 23:49:01.117841 1833619 out.go:177] * [ingress-addon-legacy-856061] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0717 23:49:01.120488 1833619 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 23:49:01.122477 1833619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 23:49:01.120629 1833619 notify.go:220] Checking for updates...
	I0717 23:49:01.126899 1833619 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0717 23:49:01.128764 1833619 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	I0717 23:49:01.131163 1833619 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 23:49:01.133268 1833619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 23:49:01.135433 1833619 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 23:49:01.161173 1833619 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 23:49:01.161270 1833619 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 23:49:01.243882 1833619 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-17 23:49:01.232641734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 23:49:01.243984 1833619 docker.go:294] overlay module found
	I0717 23:49:01.246106 1833619 out.go:177] * Using the docker driver based on user configuration
	I0717 23:49:01.247872 1833619 start.go:298] selected driver: docker
	I0717 23:49:01.247891 1833619 start.go:880] validating driver "docker" against <nil>
	I0717 23:49:01.247905 1833619 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 23:49:01.248500 1833619 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 23:49:01.316855 1833619 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-17 23:49:01.307284949 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 23:49:01.317015 1833619 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 23:49:01.317230 1833619 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 23:49:01.319175 1833619 out.go:177] * Using Docker driver with root privileges
	I0717 23:49:01.320909 1833619 cni.go:84] Creating CNI manager for ""
	I0717 23:49:01.320927 1833619 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 23:49:01.320949 1833619 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 23:49:01.320967 1833619 start_flags.go:319] config:
	{Name:ingress-addon-legacy-856061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-856061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:49:01.323203 1833619 out.go:177] * Starting control plane node ingress-addon-legacy-856061 in cluster ingress-addon-legacy-856061
	I0717 23:49:01.325340 1833619 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 23:49:01.327351 1833619 out.go:177] * Pulling base image ...
	I0717 23:49:01.329312 1833619 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 23:49:01.329393 1833619 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 23:49:01.345923 1833619 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 23:49:01.345946 1833619 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 23:49:01.408920 1833619 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0717 23:49:01.408943 1833619 cache.go:57] Caching tarball of preloaded images
	I0717 23:49:01.409097 1833619 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 23:49:01.411442 1833619 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0717 23:49:01.413214 1833619 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0717 23:49:01.535643 1833619 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0717 23:49:14.401632 1833619 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0717 23:49:14.401735 1833619 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0717 23:49:15.542945 1833619 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0717 23:49:15.543313 1833619 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/config.json ...
	I0717 23:49:15.543366 1833619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/config.json: {Name:mkf6dda7ad8091e36d0fd9f167b3050b4ca5ec63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:49:15.543546 1833619 cache.go:195] Successfully downloaded all kic artifacts
	I0717 23:49:15.543595 1833619 start.go:365] acquiring machines lock for ingress-addon-legacy-856061: {Name:mk15c36bc12a3e8f385d2467ea9c2f07cdf5cb4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 23:49:15.543653 1833619 start.go:369] acquired machines lock for "ingress-addon-legacy-856061" in 46.884µs
	I0717 23:49:15.543677 1833619 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-856061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-856061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 23:49:15.543744 1833619 start.go:125] createHost starting for "" (driver="docker")
	I0717 23:49:15.545990 1833619 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0717 23:49:15.546192 1833619 start.go:159] libmachine.API.Create for "ingress-addon-legacy-856061" (driver="docker")
	I0717 23:49:15.546218 1833619 client.go:168] LocalClient.Create starting
	I0717 23:49:15.546283 1833619 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem
	I0717 23:49:15.546327 1833619 main.go:141] libmachine: Decoding PEM data...
	I0717 23:49:15.546346 1833619 main.go:141] libmachine: Parsing certificate...
	I0717 23:49:15.546431 1833619 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem
	I0717 23:49:15.546456 1833619 main.go:141] libmachine: Decoding PEM data...
	I0717 23:49:15.546472 1833619 main.go:141] libmachine: Parsing certificate...
	I0717 23:49:15.546834 1833619 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-856061 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 23:49:15.563626 1833619 cli_runner.go:211] docker network inspect ingress-addon-legacy-856061 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 23:49:15.563712 1833619 network_create.go:281] running [docker network inspect ingress-addon-legacy-856061] to gather additional debugging logs...
	I0717 23:49:15.563737 1833619 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-856061
	W0717 23:49:15.580968 1833619 cli_runner.go:211] docker network inspect ingress-addon-legacy-856061 returned with exit code 1
	I0717 23:49:15.581006 1833619 network_create.go:284] error running [docker network inspect ingress-addon-legacy-856061]: docker network inspect ingress-addon-legacy-856061: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-856061 not found
	I0717 23:49:15.581020 1833619 network_create.go:286] output of [docker network inspect ingress-addon-legacy-856061]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-856061 not found
	
	** /stderr **
	I0717 23:49:15.581087 1833619 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 23:49:15.598282 1833619 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004ae970}
	I0717 23:49:15.598328 1833619 network_create.go:123] attempt to create docker network ingress-addon-legacy-856061 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 23:49:15.598391 1833619 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-856061 ingress-addon-legacy-856061
	I0717 23:49:15.663337 1833619 network_create.go:107] docker network ingress-addon-legacy-856061 192.168.49.0/24 created
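For anyone replaying this step by hand: the subnet and gateway chosen above are recorded in the network's IPAM config and can be read back with a plain `docker network inspect` (a sketch using this run's profile name):

	docker network inspect ingress-addon-legacy-856061 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected, per the log above: 192.168.49.0/24 192.168.49.1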
	I0717 23:49:15.663371 1833619 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-856061" container
	I0717 23:49:15.663449 1833619 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 23:49:15.682892 1833619 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-856061 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-856061 --label created_by.minikube.sigs.k8s.io=true
	I0717 23:49:15.700664 1833619 oci.go:103] Successfully created a docker volume ingress-addon-legacy-856061
	I0717 23:49:15.700757 1833619 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-856061-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-856061 --entrypoint /usr/bin/test -v ingress-addon-legacy-856061:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 23:49:17.259038 1833619 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-856061-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-856061 --entrypoint /usr/bin/test -v ingress-addon-legacy-856061:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.558232908s)
	I0717 23:49:17.259071 1833619 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-856061
	I0717 23:49:17.259090 1833619 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 23:49:17.259109 1833619 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 23:49:17.259206 1833619 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-856061:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 23:49:22.152277 1833619 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-856061:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.89301988s)
	I0717 23:49:22.152311 1833619 kic.go:199] duration metric: took 4.893197 seconds to extract preloaded images to volume
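The two `docker run --rm` invocations above follow a general pattern worth noting: seed a named volume by bind-mounting an archive read-only into a throwaway container and extracting into the volume mount. A minimal standalone sketch of the same pattern (image `alpine:3.18` and volume `demo-vol` are illustrative placeholders, not taken from this run):

	docker volume create demo-vol
	docker run --rm \
	  -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	  -v demo-vol:/extractDir \
	  alpine:3.18 sh -c 'apk add --no-cache lz4 tar >/dev/null && tar -I lz4 -xf /preloaded.tar -C /extractDir'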
	W0717 23:49:22.152440 1833619 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 23:49:22.152560 1833619 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 23:49:22.217529 1833619 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-856061 --name ingress-addon-legacy-856061 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-856061 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-856061 --network ingress-addon-legacy-856061 --ip 192.168.49.2 --volume ingress-addon-legacy-856061:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 23:49:22.554068 1833619 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-856061 --format={{.State.Running}}
	I0717 23:49:22.573170 1833619 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-856061 --format={{.State.Status}}
	I0717 23:49:22.595459 1833619 cli_runner.go:164] Run: docker exec ingress-addon-legacy-856061 stat /var/lib/dpkg/alternatives/iptables
	I0717 23:49:22.690166 1833619 oci.go:144] the created container "ingress-addon-legacy-856061" has a running status.
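The random host ports bound by the `--publish=127.0.0.1::…` flags above are what the SSH provisioning below connects to (34678 in this run); they can be looked up directly:

	docker port ingress-addon-legacy-856061 22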
	I0717 23:49:22.690192 1833619 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/ingress-addon-legacy-856061/id_rsa...
	I0717 23:49:23.199029 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/ingress-addon-legacy-856061/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 23:49:23.199078 1833619 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/ingress-addon-legacy-856061/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 23:49:23.233137 1833619 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-856061 --format={{.State.Status}}
	I0717 23:49:23.263194 1833619 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 23:49:23.263219 1833619 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-856061 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 23:49:23.343899 1833619 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-856061 --format={{.State.Status}}
	I0717 23:49:23.367134 1833619 machine.go:88] provisioning docker machine ...
	I0717 23:49:23.367164 1833619 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-856061"
	I0717 23:49:23.367233 1833619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-856061
	I0717 23:49:23.393811 1833619 main.go:141] libmachine: Using SSH client type: native
	I0717 23:49:23.394274 1833619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34678 <nil> <nil>}
	I0717 23:49:23.394293 1833619 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-856061 && echo "ingress-addon-legacy-856061" | sudo tee /etc/hostname
	I0717 23:49:23.618017 1833619 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-856061
	
	I0717 23:49:23.618114 1833619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-856061
	I0717 23:49:23.649004 1833619 main.go:141] libmachine: Using SSH client type: native
	I0717 23:49:23.649438 1833619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34678 <nil> <nil>}
	I0717 23:49:23.649462 1833619 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-856061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-856061/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-856061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 23:49:23.792235 1833619 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 23:49:23.792276 1833619 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-1800837/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-1800837/.minikube}
	I0717 23:49:23.792318 1833619 ubuntu.go:177] setting up certificates
	I0717 23:49:23.792326 1833619 provision.go:83] configureAuth start
	I0717 23:49:23.792387 1833619 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-856061
	I0717 23:49:23.811079 1833619 provision.go:138] copyHostCerts
	I0717 23:49:23.811121 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem
	I0717 23:49:23.811153 1833619 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem, removing ...
	I0717 23:49:23.811165 1833619 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem
	I0717 23:49:23.811236 1833619 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem (1082 bytes)
	I0717 23:49:23.811315 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem
	I0717 23:49:23.811335 1833619 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem, removing ...
	I0717 23:49:23.811342 1833619 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem
	I0717 23:49:23.811370 1833619 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem (1123 bytes)
	I0717 23:49:23.811416 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem
	I0717 23:49:23.811435 1833619 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem, removing ...
	I0717 23:49:23.811439 1833619 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem
	I0717 23:49:23.812310 1833619 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem (1675 bytes)
	I0717 23:49:23.812384 1833619 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-856061 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-856061]
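The SAN list logged above can be verified against the generated certificate, e.g. with openssl (a sketch; the `-ext` option needs OpenSSL 1.1.1 or newer):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem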
	I0717 23:49:24.310368 1833619 provision.go:172] copyRemoteCerts
	I0717 23:49:24.310518 1833619 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 23:49:24.310609 1833619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-856061
	I0717 23:49:24.330471 1833619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34678 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/ingress-addon-legacy-856061/id_rsa Username:docker}
	I0717 23:49:24.431241 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 23:49:24.431304 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 23:49:24.461712 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 23:49:24.461797 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 23:49:24.492735 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 23:49:24.492827 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 23:49:24.521560 1833619 provision.go:86] duration metric: configureAuth took 729.219818ms
	I0717 23:49:24.521626 1833619 ubuntu.go:193] setting minikube options for container-runtime
	I0717 23:49:24.521841 1833619 config.go:182] Loaded profile config "ingress-addon-legacy-856061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 23:49:24.521952 1833619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-856061
	I0717 23:49:24.540397 1833619 main.go:141] libmachine: Using SSH client type: native
	I0717 23:49:24.540887 1833619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34678 <nil> <nil>}
	I0717 23:49:24.540921 1833619 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 23:49:24.819860 1833619 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 23:49:24.819883 1833619 machine.go:91] provisioned docker machine in 1.452729851s
	I0717 23:49:24.819893 1833619 client.go:171] LocalClient.Create took 9.273669589s
	I0717 23:49:24.819904 1833619 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-856061" took 9.273712764s
	I0717 23:49:24.819912 1833619 start.go:300] post-start starting for "ingress-addon-legacy-856061" (driver="docker")
	I0717 23:49:24.819924 1833619 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 23:49:24.820002 1833619 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 23:49:24.820057 1833619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-856061
	I0717 23:49:24.838377 1833619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34678 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/ingress-addon-legacy-856061/id_rsa Username:docker}
	I0717 23:49:24.933252 1833619 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 23:49:24.937175 1833619 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 23:49:24.937210 1833619 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 23:49:24.937221 1833619 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 23:49:24.937233 1833619 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 23:49:24.937248 1833619 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/addons for local assets ...
	I0717 23:49:24.937307 1833619 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/files for local assets ...
	I0717 23:49:24.937396 1833619 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> 18062262.pem in /etc/ssl/certs
	I0717 23:49:24.937408 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> /etc/ssl/certs/18062262.pem
	I0717 23:49:24.937515 1833619 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 23:49:24.947867 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem --> /etc/ssl/certs/18062262.pem (1708 bytes)
	I0717 23:49:24.976320 1833619 start.go:303] post-start completed in 156.390529ms
	I0717 23:49:24.976704 1833619 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-856061
	I0717 23:49:24.993874 1833619 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/config.json ...
	I0717 23:49:24.994144 1833619 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 23:49:24.994196 1833619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-856061
	I0717 23:49:25.021657 1833619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34678 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/ingress-addon-legacy-856061/id_rsa Username:docker}
	I0717 23:49:25.112621 1833619 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 23:49:25.119121 1833619 start.go:128] duration metric: createHost completed in 9.575362579s
	I0717 23:49:25.119146 1833619 start.go:83] releasing machines lock for "ingress-addon-legacy-856061", held for 9.57547937s
	I0717 23:49:25.119225 1833619 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-856061
	I0717 23:49:25.137159 1833619 ssh_runner.go:195] Run: cat /version.json
	I0717 23:49:25.137198 1833619 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 23:49:25.137217 1833619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-856061
	I0717 23:49:25.137262 1833619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-856061
	I0717 23:49:25.156253 1833619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34678 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/ingress-addon-legacy-856061/id_rsa Username:docker}
	I0717 23:49:25.169488 1833619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34678 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/ingress-addon-legacy-856061/id_rsa Username:docker}
	I0717 23:49:25.380809 1833619 ssh_runner.go:195] Run: systemctl --version
	I0717 23:49:25.386442 1833619 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 23:49:25.536611 1833619 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 23:49:25.542145 1833619 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 23:49:25.566731 1833619 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 23:49:25.566860 1833619 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 23:49:25.608028 1833619 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 23:49:25.608051 1833619 start.go:466] detecting cgroup driver to use...
	I0717 23:49:25.608115 1833619 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 23:49:25.608204 1833619 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 23:49:25.627727 1833619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 23:49:25.641129 1833619 docker.go:196] disabling cri-docker service (if available) ...
	I0717 23:49:25.641231 1833619 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 23:49:25.657073 1833619 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 23:49:25.673960 1833619 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 23:49:25.773447 1833619 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 23:49:25.879806 1833619 docker.go:212] disabling docker service ...
	I0717 23:49:25.879871 1833619 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 23:49:25.903224 1833619 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 23:49:25.917104 1833619 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 23:49:26.026262 1833619 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 23:49:26.140546 1833619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 23:49:26.154279 1833619 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 23:49:26.174485 1833619 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 23:49:26.174551 1833619 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 23:49:26.186267 1833619 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 23:49:26.186335 1833619 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 23:49:26.198503 1833619 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 23:49:26.210432 1833619 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
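Taken together, the sed edits above leave `/etc/crio/crio.conf.d/02-crio.conf` carrying values along these lines (an illustrative reconstruction from the commands, not captured from the node):

	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"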
	I0717 23:49:26.222354 1833619 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 23:49:26.233752 1833619 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 23:49:26.244121 1833619 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 23:49:26.253948 1833619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 23:49:26.353267 1833619 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 23:49:26.482431 1833619 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 23:49:26.482548 1833619 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 23:49:26.487509 1833619 start.go:534] Will wait 60s for crictl version
	I0717 23:49:26.487613 1833619 ssh_runner.go:195] Run: which crictl
	I0717 23:49:26.491912 1833619 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 23:49:26.535412 1833619 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 23:49:26.535533 1833619 ssh_runner.go:195] Run: crio --version
	I0717 23:49:26.586998 1833619 ssh_runner.go:195] Run: crio --version
	I0717 23:49:26.631999 1833619 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0717 23:49:26.633634 1833619 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-856061 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 23:49:26.650662 1833619 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 23:49:26.655086 1833619 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 23:49:26.668055 1833619 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 23:49:26.668136 1833619 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 23:49:26.723135 1833619 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
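The preload check above amounts to listing the runtime's image store and looking for the expected tags; the same query can be run by hand (`jq` here is an assumption for readability, minikube itself parses the JSON in Go):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'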
	I0717 23:49:26.723211 1833619 ssh_runner.go:195] Run: which lz4
	I0717 23:49:26.727685 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0717 23:49:26.727794 1833619 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 23:49:26.732053 1833619 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 23:49:26.732088 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0717 23:49:28.853387 1833619 crio.go:444] Took 2.125634 seconds to copy over tarball
	I0717 23:49:28.853462 1833619 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 23:49:31.643471 1833619 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.789976151s)
	I0717 23:49:31.643498 1833619 crio.go:451] Took 2.790088 seconds to extract the tarball
	I0717 23:49:31.643516 1833619 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 23:49:31.804212 1833619 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 23:49:31.848058 1833619 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 23:49:31.848081 1833619 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 23:49:31.848118 1833619 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 23:49:31.848324 1833619 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 23:49:31.848397 1833619 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 23:49:31.848457 1833619 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 23:49:31.848526 1833619 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 23:49:31.848590 1833619 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 23:49:31.848658 1833619 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 23:49:31.848722 1833619 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 23:49:31.849919 1833619 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 23:49:31.850378 1833619 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 23:49:31.850702 1833619 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 23:49:31.850857 1833619 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 23:49:31.850992 1833619 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 23:49:31.851123 1833619 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 23:49:31.851245 1833619 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 23:49:31.851834 1833619 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	W0717 23:49:32.321988 1833619 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 23:49:32.322202 1833619 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0717 23:49:32.325171 1833619 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 23:49:32.325440 1833619 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0717 23:49:32.330698 1833619 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0717 23:49:32.330882 1833619 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0717 23:49:32.332683 1833619 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0717 23:49:32.383957 1833619 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 23:49:32.384190 1833619 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0717 23:49:32.391097 1833619 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 23:49:32.391345 1833619 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0717 23:49:32.405707 1833619 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0717 23:49:32.405955 1833619 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0717 23:49:32.458751 1833619 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0717 23:49:32.458804 1833619 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 23:49:32.458865 1833619 ssh_runner.go:195] Run: which crictl
	I0717 23:49:32.458948 1833619 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0717 23:49:32.458967 1833619 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 23:49:32.458989 1833619 ssh_runner.go:195] Run: which crictl
	W0717 23:49:32.466627 1833619 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 23:49:32.466782 1833619 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 23:49:32.537327 1833619 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0717 23:49:32.537375 1833619 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 23:49:32.537437 1833619 ssh_runner.go:195] Run: which crictl
	I0717 23:49:32.537547 1833619 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0717 23:49:32.537576 1833619 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 23:49:32.537610 1833619 ssh_runner.go:195] Run: which crictl
	I0717 23:49:32.563902 1833619 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0717 23:49:32.563957 1833619 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 23:49:32.564020 1833619 ssh_runner.go:195] Run: which crictl
	I0717 23:49:32.578125 1833619 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0717 23:49:32.578179 1833619 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 23:49:32.578228 1833619 ssh_runner.go:195] Run: which crictl
	I0717 23:49:32.596336 1833619 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0717 23:49:32.596384 1833619 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 23:49:32.596438 1833619 ssh_runner.go:195] Run: which crictl
	I0717 23:49:32.596541 1833619 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0717 23:49:32.596612 1833619 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0717 23:49:32.720955 1833619 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0717 23:49:32.721054 1833619 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 23:49:32.721105 1833619 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 23:49:32.721143 1833619 ssh_runner.go:195] Run: which crictl
	I0717 23:49:32.721201 1833619 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0717 23:49:32.721271 1833619 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0717 23:49:32.721283 1833619 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 23:49:32.721326 1833619 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0717 23:49:32.721380 1833619 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0717 23:49:32.721432 1833619 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0717 23:49:32.858917 1833619 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0717 23:49:32.859031 1833619 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0717 23:49:32.859068 1833619 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0717 23:49:32.859038 1833619 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0717 23:49:32.859176 1833619 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0717 23:49:32.859205 1833619 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 23:49:32.919027 1833619 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 23:49:32.919151 1833619 cache_images.go:92] LoadImages completed in 1.071055707s
	W0717 23:49:32.919258 1833619 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	I0717 23:49:32.919375 1833619 ssh_runner.go:195] Run: crio config
	I0717 23:49:32.972273 1833619 cni.go:84] Creating CNI manager for ""
	I0717 23:49:32.972295 1833619 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 23:49:32.972305 1833619 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 23:49:32.972349 1833619 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-856061 NodeName:ingress-addon-legacy-856061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 23:49:32.972517 1833619 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-856061"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
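The rendered config above is later written to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp step below); from a shell on the node it can be exercised without side effects via kubeadm's dry-run mode (a sketch; the kubeadm path under /var/lib/minikube/binaries/v1.18.20 is assumed from the binaries listing below):

	sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run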
	I0717 23:49:32.972605 1833619 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-856061 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-856061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 23:49:32.972677 1833619 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0717 23:49:32.983493 1833619 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 23:49:32.983630 1833619 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 23:49:32.994181 1833619 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0717 23:49:33.018491 1833619 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0717 23:49:33.041913 1833619 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 23:49:33.065237 1833619 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 23:49:33.070062 1833619 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
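Note the update pattern used for /etc/hosts here and for host.minikube.internal above: inside a container /etc/hosts is a bind mount, so `sed -i` (which replaces the file) can fail with "Device or resource busy", while rewriting through a temp file and `sudo cp` keeps the original inode. Generalized (HOSTS_ENTRY is a placeholder):

	HOSTS_ENTRY="192.168.49.2	control-plane.minikube.internal"
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo "$HOSTS_ENTRY"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts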
	I0717 23:49:33.084995 1833619 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061 for IP: 192.168.49.2
	I0717 23:49:33.085087 1833619 certs.go:190] acquiring lock for shared ca certs: {Name:mkb76b85951e1a7e4a78939a9bc1392aa19273b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:49:33.085267 1833619 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key
	I0717 23:49:33.085352 1833619 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key
	I0717 23:49:33.085413 1833619 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.key
	I0717 23:49:33.085432 1833619 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt with IP's: []
	I0717 23:49:33.884774 1833619 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt ...
	I0717 23:49:33.884809 1833619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: {Name:mk3266ae02462bc55fa23923519fb3cb7f7defc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:49:33.885011 1833619 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.key ...
	I0717 23:49:33.885024 1833619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.key: {Name:mkfee3407fa7239ce3c1a62112a1211a2d2a9390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:49:33.885121 1833619 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.key.dd3b5fb2
	I0717 23:49:33.885136 1833619 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 23:49:34.191018 1833619 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.crt.dd3b5fb2 ...
	I0717 23:49:34.191047 1833619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.crt.dd3b5fb2: {Name:mk65294961a869e2b3d68c0dbac8ea54d0e2d9d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:49:34.191230 1833619 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.key.dd3b5fb2 ...
	I0717 23:49:34.191246 1833619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.key.dd3b5fb2: {Name:mkc8b637c3884904aab05b2bb8347aeae1838d1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:49:34.191337 1833619 certs.go:337] copying /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.crt
	I0717 23:49:34.191416 1833619 certs.go:341] copying /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.key
	I0717 23:49:34.191473 1833619 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/proxy-client.key
	I0717 23:49:34.191494 1833619 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/proxy-client.crt with IP's: []
	I0717 23:49:34.388359 1833619 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/proxy-client.crt ...
	I0717 23:49:34.388389 1833619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/proxy-client.crt: {Name:mkd794f59a67898717c48fab26cf26b986892031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:49:34.388573 1833619 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/proxy-client.key ...
	I0717 23:49:34.388591 1833619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/proxy-client.key: {Name:mk0f2046eea26f4a2a0e5b4bcaaabdea92d86d8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:49:34.388676 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 23:49:34.388696 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 23:49:34.388709 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 23:49:34.388719 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 23:49:34.388733 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 23:49:34.388748 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 23:49:34.388764 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 23:49:34.388775 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 23:49:34.388839 1833619 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226.pem (1338 bytes)
	W0717 23:49:34.388880 1833619 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226_empty.pem, impossibly tiny 0 bytes
	I0717 23:49:34.388895 1833619 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 23:49:34.388927 1833619 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem (1082 bytes)
	I0717 23:49:34.388955 1833619 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem (1123 bytes)
	I0717 23:49:34.388986 1833619 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem (1675 bytes)
	I0717 23:49:34.389034 1833619 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem (1708 bytes)
	I0717 23:49:34.389065 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226.pem -> /usr/share/ca-certificates/1806226.pem
	I0717 23:49:34.389081 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> /usr/share/ca-certificates/18062262.pem
	I0717 23:49:34.389096 1833619 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 23:49:34.389662 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 23:49:34.419369 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 23:49:34.448863 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 23:49:34.477807 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 23:49:34.506641 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 23:49:34.535842 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 23:49:34.564708 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 23:49:34.592975 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 23:49:34.621345 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226.pem --> /usr/share/ca-certificates/1806226.pem (1338 bytes)
	I0717 23:49:34.649831 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem --> /usr/share/ca-certificates/18062262.pem (1708 bytes)
	I0717 23:49:34.678087 1833619 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 23:49:34.707012 1833619 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 23:49:34.728557 1833619 ssh_runner.go:195] Run: openssl version
	I0717 23:49:34.735686 1833619 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806226.pem && ln -fs /usr/share/ca-certificates/1806226.pem /etc/ssl/certs/1806226.pem"
	I0717 23:49:34.747586 1833619 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806226.pem
	I0717 23:49:34.752339 1833619 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 23:44 /usr/share/ca-certificates/1806226.pem
	I0717 23:49:34.752431 1833619 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806226.pem
	I0717 23:49:34.761193 1833619 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806226.pem /etc/ssl/certs/51391683.0"
	I0717 23:49:34.772704 1833619 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18062262.pem && ln -fs /usr/share/ca-certificates/18062262.pem /etc/ssl/certs/18062262.pem"
	I0717 23:49:34.784229 1833619 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18062262.pem
	I0717 23:49:34.788967 1833619 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 23:44 /usr/share/ca-certificates/18062262.pem
	I0717 23:49:34.789031 1833619 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18062262.pem
	I0717 23:49:34.797907 1833619 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18062262.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 23:49:34.809687 1833619 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 23:49:34.821384 1833619 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 23:49:34.826152 1833619 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0717 23:49:34.826222 1833619 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 23:49:34.835208 1833619 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
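
	The test/ln pairs above implement OpenSSL's hashed-directory convention: each trusted CA must be reachable as /etc/ssl/certs/<subject-hash>.0 (b5213941.0 for minikubeCA here) so that TLS consumers can look it up by hash. For one certificate the dance reduces to:

	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	    sudo test -L "/etc/ssl/certs/$HASH.0" || sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
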
	I0717 23:49:34.847116 1833619 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 23:49:34.851794 1833619 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 23:49:34.851850 1833619 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-856061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-856061 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:49:34.851932 1833619 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 23:49:34.851997 1833619 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 23:49:34.896137 1833619 cri.go:89] found id: ""
	I0717 23:49:34.896263 1833619 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 23:49:34.907413 1833619 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 23:49:34.918767 1833619 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 23:49:34.918870 1833619 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 23:49:34.930262 1833619 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 23:49:34.930361 1833619 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 23:49:34.989258 1833619 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 23:49:34.989679 1833619 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 23:49:35.044212 1833619 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 23:49:35.044357 1833619 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-aws
	I0717 23:49:35.044414 1833619 kubeadm.go:322] OS: Linux
	I0717 23:49:35.044497 1833619 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 23:49:35.044583 1833619 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 23:49:35.044660 1833619 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 23:49:35.044744 1833619 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 23:49:35.044830 1833619 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 23:49:35.044922 1833619 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 23:49:35.143820 1833619 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 23:49:35.143995 1833619 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 23:49:35.144140 1833619 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 23:49:35.392651 1833619 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 23:49:35.394277 1833619 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 23:49:35.394395 1833619 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 23:49:35.490326 1833619 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 23:49:35.495007 1833619 out.go:204]   - Generating certificates and keys ...
	I0717 23:49:35.495172 1833619 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 23:49:35.495327 1833619 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 23:49:35.863971 1833619 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 23:49:36.052053 1833619 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 23:49:36.283152 1833619 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 23:49:36.406197 1833619 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 23:49:36.747841 1833619 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 23:49:36.748282 1833619 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-856061 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 23:49:36.953465 1833619 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 23:49:36.953888 1833619 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-856061 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 23:49:37.161876 1833619 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 23:49:37.409252 1833619 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 23:49:37.901656 1833619 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 23:49:37.902168 1833619 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 23:49:38.160284 1833619 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 23:49:38.921371 1833619 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 23:49:39.491359 1833619 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 23:49:40.009858 1833619 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 23:49:40.010555 1833619 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 23:49:40.015443 1833619 out.go:204]   - Booting up control plane ...
	I0717 23:49:40.015554 1833619 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 23:49:40.020755 1833619 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 23:49:40.026555 1833619 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 23:49:40.026650 1833619 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 23:49:40.026799 1833619 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 23:49:52.528455 1833619 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.502454 seconds
	I0717 23:49:52.528577 1833619 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 23:49:52.545078 1833619 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 23:49:53.063767 1833619 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 23:49:53.063934 1833619 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-856061 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 23:49:53.571620 1833619 kubeadm.go:322] [bootstrap-token] Using token: 0h0wle.6yxg33nqrguqormn
	I0717 23:49:53.573688 1833619 out.go:204]   - Configuring RBAC rules ...
	I0717 23:49:53.573802 1833619 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 23:49:53.579012 1833619 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 23:49:53.586510 1833619 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 23:49:53.589340 1833619 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 23:49:53.592113 1833619 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 23:49:53.594626 1833619 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 23:49:53.603885 1833619 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 23:49:53.912162 1833619 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 23:49:54.016304 1833619 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 23:49:54.018144 1833619 kubeadm.go:322] 
	I0717 23:49:54.018239 1833619 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 23:49:54.018252 1833619 kubeadm.go:322] 
	I0717 23:49:54.018346 1833619 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 23:49:54.018361 1833619 kubeadm.go:322] 
	I0717 23:49:54.018386 1833619 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 23:49:54.018501 1833619 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 23:49:54.018562 1833619 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 23:49:54.018573 1833619 kubeadm.go:322] 
	I0717 23:49:54.018644 1833619 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 23:49:54.018731 1833619 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 23:49:54.018842 1833619 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 23:49:54.018851 1833619 kubeadm.go:322] 
	I0717 23:49:54.018954 1833619 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 23:49:54.019082 1833619 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 23:49:54.019090 1833619 kubeadm.go:322] 
	I0717 23:49:54.019245 1833619 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0h0wle.6yxg33nqrguqormn \
	I0717 23:49:54.019393 1833619 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b5091145d8291edee463dab95a1bdfeb1e97f89842481bec35f68788c073ce7f \
	I0717 23:49:54.019432 1833619 kubeadm.go:322]     --control-plane 
	I0717 23:49:54.019442 1833619 kubeadm.go:322] 
	I0717 23:49:54.019548 1833619 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 23:49:54.019556 1833619 kubeadm.go:322] 
	I0717 23:49:54.019657 1833619 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0h0wle.6yxg33nqrguqormn \
	I0717 23:49:54.019777 1833619 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b5091145d8291edee463dab95a1bdfeb1e97f89842481bec35f68788c073ce7f 
	I0717 23:49:54.025573 1833619 kubeadm.go:322] W0717 23:49:34.988004    1230 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 23:49:54.025852 1833619 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 23:49:54.025961 1833619 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 23:49:54.026080 1833619 kubeadm.go:322] W0717 23:49:40.020413    1230 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 23:49:54.026198 1833619 kubeadm.go:322] W0717 23:49:40.022091    1230 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
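
	The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 over the cluster CA's public key. With this cluster's certificateDir (/var/lib/minikube/certs, per the [certs] line above) it can be recomputed on the node with the usual recipe from the Kubernetes docs, assuming an RSA CA key:

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
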
	I0717 23:49:54.026220 1833619 cni.go:84] Creating CNI manager for ""
	I0717 23:49:54.026231 1833619 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 23:49:54.028448 1833619 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 23:49:54.030284 1833619 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 23:49:54.035908 1833619 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0717 23:49:54.035931 1833619 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 23:49:54.059330 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 23:49:54.506072 1833619 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 23:49:54.506208 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:54.506290 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=ingress-addon-legacy-856061 minikube.k8s.io/updated_at=2023_07_17T23_49_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:54.524898 1833619 ops.go:34] apiserver oom_adj: -16
	I0717 23:49:54.665955 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:55.269745 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:55.769202 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:56.269411 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:56.769382 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:57.269644 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:57.769618 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:58.269767 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:58.769855 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:59.270007 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:49:59.769466 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:00.269904 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:00.769218 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:01.269921 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:01.769491 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:02.269153 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:02.769791 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:03.270158 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:03.769219 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:04.269974 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:04.769713 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:05.269451 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:05.770177 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:06.270206 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:06.770117 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:07.269128 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:07.770036 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:08.269270 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:08.770137 1833619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 23:50:08.882766 1833619 kubeadm.go:1081] duration metric: took 14.376605373s to wait for elevateKubeSystemPrivileges.
	I0717 23:50:08.882794 1833619 kubeadm.go:406] StartCluster complete in 34.030951034s
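
	The run of identical `get sa default` commands above is a ~500ms poll for the default ServiceAccount, which kubeadm only creates once the controller-manager is up; elevateKubeSystemPrivileges waits on it before applying the RBAC binding. As a shell loop it is simply:

	    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # retry until kubeadm has created the default ServiceAccount
	    done
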
	I0717 23:50:08.882810 1833619 settings.go:142] acquiring lock: {Name:mk74b5b544da6acf33d2b75c01a65c483577bcd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:50:08.882874 1833619 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0717 23:50:08.883694 1833619 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/kubeconfig: {Name:mkabbac053a2a3ee682ab9031f228204945b972c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:50:08.884471 1833619 kapi.go:59] client config for ingress-addon-legacy-856061: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.key", CAFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 23:50:08.885887 1833619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 23:50:08.886138 1833619 config.go:182] Loaded profile config "ingress-addon-legacy-856061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 23:50:08.886174 1833619 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 23:50:08.886232 1833619 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-856061"
	I0717 23:50:08.886245 1833619 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-856061"
	I0717 23:50:08.886299 1833619 host.go:66] Checking if "ingress-addon-legacy-856061" exists ...
	I0717 23:50:08.886934 1833619 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 23:50:08.886969 1833619 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-856061"
	I0717 23:50:08.887062 1833619 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-856061"
	I0717 23:50:08.887432 1833619 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-856061 --format={{.State.Status}}
	I0717 23:50:08.887932 1833619 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-856061 --format={{.State.Status}}
	I0717 23:50:08.938355 1833619 kapi.go:59] client config for ingress-addon-legacy-856061: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.key", CAFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 23:50:08.941714 1833619 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 23:50:08.943466 1833619 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 23:50:08.943497 1833619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 23:50:08.943561 1833619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-856061
	I0717 23:50:08.967973 1833619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34678 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/ingress-addon-legacy-856061/id_rsa Username:docker}
	I0717 23:50:08.988168 1833619 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-856061"
	I0717 23:50:08.988217 1833619 host.go:66] Checking if "ingress-addon-legacy-856061" exists ...
	I0717 23:50:08.988745 1833619 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-856061 --format={{.State.Status}}
	I0717 23:50:09.020324 1833619 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 23:50:09.020347 1833619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 23:50:09.020430 1833619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-856061
	I0717 23:50:09.050551 1833619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34678 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/ingress-addon-legacy-856061/id_rsa Username:docker}
	I0717 23:50:09.167303 1833619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 23:50:09.183438 1833619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
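
	Reconstructed from the two sed expressions in that pipeline, the touched region of the CoreDNS Corefile ends up as follows (unrelated plugins elided): `log` is inserted before the `errors` line, and a hosts block is spliced in just before the forward plugin.

	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf ...
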
	I0717 23:50:09.317421 1833619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 23:50:09.618082 1833619 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-856061" context rescaled to 1 replicas
	I0717 23:50:09.618137 1833619 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 23:50:09.620249 1833619 out.go:177] * Verifying Kubernetes components...
	I0717 23:50:09.622471 1833619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:50:09.846149 1833619 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 23:50:09.848298 1833619 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 23:50:09.847023 1833619 kapi.go:59] client config for ingress-addon-legacy-856061: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.key", CAFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 23:50:09.850020 1833619 addons.go:502] enable addons completed in 963.837567ms: enabled=[storage-provisioner default-storageclass]
	I0717 23:50:09.850260 1833619 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-856061" to be "Ready" ...
	I0717 23:50:11.866901 1833619 node_ready.go:58] node "ingress-addon-legacy-856061" has status "Ready":"False"
	I0717 23:50:14.366647 1833619 node_ready.go:58] node "ingress-addon-legacy-856061" has status "Ready":"False"
	I0717 23:50:16.367072 1833619 node_ready.go:58] node "ingress-addon-legacy-856061" has status "Ready":"False"
	I0717 23:50:17.866060 1833619 node_ready.go:49] node "ingress-addon-legacy-856061" has status "Ready":"True"
	I0717 23:50:17.866096 1833619 node_ready.go:38] duration metric: took 8.015820577s waiting for node "ingress-addon-legacy-856061" to be "Ready" ...
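
	The node_ready poll above (status flips from "False" to "True" after about 8s) is equivalent to a single `kubectl wait`, runnable by hand against the same kubeconfig:

	    kubectl --kubeconfig=/home/jenkins/minikube-integration/16899-1800837/kubeconfig \
	      wait --for=condition=Ready node/ingress-addon-legacy-856061 --timeout=6m
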
	I0717 23:50:17.866110 1833619 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:50:17.873458 1833619 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-rfdvp" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:19.879061 1833619 pod_ready.go:102] pod "coredns-66bff467f8-rfdvp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 23:50:09 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0717 23:50:22.379084 1833619 pod_ready.go:102] pod "coredns-66bff467f8-rfdvp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 23:50:09 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0717 23:50:24.379323 1833619 pod_ready.go:102] pod "coredns-66bff467f8-rfdvp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 23:50:09 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0717 23:50:26.381664 1833619 pod_ready.go:102] pod "coredns-66bff467f8-rfdvp" in "kube-system" namespace has status "Ready":"False"
	I0717 23:50:28.382382 1833619 pod_ready.go:92] pod "coredns-66bff467f8-rfdvp" in "kube-system" namespace has status "Ready":"True"
	I0717 23:50:28.382437 1833619 pod_ready.go:81] duration metric: took 10.508944543s waiting for pod "coredns-66bff467f8-rfdvp" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:28.382449 1833619 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-856061" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:28.387474 1833619 pod_ready.go:92] pod "etcd-ingress-addon-legacy-856061" in "kube-system" namespace has status "Ready":"True"
	I0717 23:50:28.387503 1833619 pod_ready.go:81] duration metric: took 5.046046ms waiting for pod "etcd-ingress-addon-legacy-856061" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:28.387520 1833619 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-856061" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:28.392529 1833619 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-856061" in "kube-system" namespace has status "Ready":"True"
	I0717 23:50:28.392556 1833619 pod_ready.go:81] duration metric: took 5.028758ms waiting for pod "kube-apiserver-ingress-addon-legacy-856061" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:28.392567 1833619 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-856061" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:28.397682 1833619 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-856061" in "kube-system" namespace has status "Ready":"True"
	I0717 23:50:28.397712 1833619 pod_ready.go:81] duration metric: took 5.136678ms waiting for pod "kube-controller-manager-ingress-addon-legacy-856061" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:28.397724 1833619 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7nvc" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:28.402860 1833619 pod_ready.go:92] pod "kube-proxy-m7nvc" in "kube-system" namespace has status "Ready":"True"
	I0717 23:50:28.402886 1833619 pod_ready.go:81] duration metric: took 5.15505ms waiting for pod "kube-proxy-m7nvc" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:28.402897 1833619 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-856061" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:28.577350 1833619 request.go:628] Waited for 174.368401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-856061
	I0717 23:50:28.777326 1833619 request.go:628] Waited for 197.277167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-856061
	I0717 23:50:28.780355 1833619 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-856061" in "kube-system" namespace has status "Ready":"True"
	I0717 23:50:28.780381 1833619 pod_ready.go:81] duration metric: took 377.45561ms waiting for pod "kube-scheduler-ingress-addon-legacy-856061" in "kube-system" namespace to be "Ready" ...
	I0717 23:50:28.780394 1833619 pod_ready.go:38] duration metric: took 10.914263184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:50:28.780410 1833619 api_server.go:52] waiting for apiserver process to appear ...
	I0717 23:50:28.780472 1833619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:50:28.794612 1833619 api_server.go:72] duration metric: took 19.176426914s to wait for apiserver process to appear ...
	I0717 23:50:28.794638 1833619 api_server.go:88] waiting for apiserver healthz status ...
	I0717 23:50:28.794656 1833619 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 23:50:28.803788 1833619 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 23:50:28.804581 1833619 api_server.go:141] control plane version: v1.18.20
	I0717 23:50:28.804606 1833619 api_server.go:131] duration metric: took 9.962378ms to wait for apiserver health ...
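
	The healthz probe can be reproduced from the host with curl, using the profile's client certificate and the shared CA (paths as in the client config logged above):

	    curl --cacert /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt \
	         --cert /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt \
	         --key /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.key \
	         https://192.168.49.2:8443/healthz   # expected body: ok
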
	I0717 23:50:28.804615 1833619 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 23:50:28.977196 1833619 request.go:628] Waited for 172.496017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 23:50:28.983338 1833619 system_pods.go:59] 8 kube-system pods found
	I0717 23:50:28.983371 1833619 system_pods.go:61] "coredns-66bff467f8-rfdvp" [5fa4238e-c6f7-4ae7-b091-bda2244a1f4a] Running
	I0717 23:50:28.983381 1833619 system_pods.go:61] "etcd-ingress-addon-legacy-856061" [db4ff9b2-c1f7-4cc0-a523-45289097cb01] Running
	I0717 23:50:28.983386 1833619 system_pods.go:61] "kindnet-tt7jm" [e0a2cb2d-b66d-4863-ba8d-abfb548b7844] Running
	I0717 23:50:28.983391 1833619 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-856061" [1869b2bf-d6c7-4f31-a8e7-e39369dac3ba] Running
	I0717 23:50:28.983399 1833619 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-856061" [2f0c8b27-50a1-4c2f-8cf8-d5e87ce2fcc9] Running
	I0717 23:50:28.983407 1833619 system_pods.go:61] "kube-proxy-m7nvc" [0f40a3fe-2f70-4c27-acf3-ccd53eeb34e9] Running
	I0717 23:50:28.983412 1833619 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-856061" [c1355f2e-8953-46db-912a-c4e700364414] Running
	I0717 23:50:28.983417 1833619 system_pods.go:61] "storage-provisioner" [77ee2c39-83f6-491f-9f0a-18359be5044b] Running
	I0717 23:50:28.983428 1833619 system_pods.go:74] duration metric: took 178.808068ms to wait for pod list to return data ...
	I0717 23:50:28.983438 1833619 default_sa.go:34] waiting for default service account to be created ...
	I0717 23:50:29.177711 1833619 request.go:628] Waited for 194.183881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 23:50:29.180232 1833619 default_sa.go:45] found service account: "default"
	I0717 23:50:29.180262 1833619 default_sa.go:55] duration metric: took 196.817611ms for default service account to be created ...
	I0717 23:50:29.180276 1833619 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 23:50:29.377691 1833619 request.go:628] Waited for 197.351579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 23:50:29.383779 1833619 system_pods.go:86] 8 kube-system pods found
	I0717 23:50:29.383811 1833619 system_pods.go:89] "coredns-66bff467f8-rfdvp" [5fa4238e-c6f7-4ae7-b091-bda2244a1f4a] Running
	I0717 23:50:29.383818 1833619 system_pods.go:89] "etcd-ingress-addon-legacy-856061" [db4ff9b2-c1f7-4cc0-a523-45289097cb01] Running
	I0717 23:50:29.383824 1833619 system_pods.go:89] "kindnet-tt7jm" [e0a2cb2d-b66d-4863-ba8d-abfb548b7844] Running
	I0717 23:50:29.383829 1833619 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-856061" [1869b2bf-d6c7-4f31-a8e7-e39369dac3ba] Running
	I0717 23:50:29.383834 1833619 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-856061" [2f0c8b27-50a1-4c2f-8cf8-d5e87ce2fcc9] Running
	I0717 23:50:29.383839 1833619 system_pods.go:89] "kube-proxy-m7nvc" [0f40a3fe-2f70-4c27-acf3-ccd53eeb34e9] Running
	I0717 23:50:29.383845 1833619 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-856061" [c1355f2e-8953-46db-912a-c4e700364414] Running
	I0717 23:50:29.383849 1833619 system_pods.go:89] "storage-provisioner" [77ee2c39-83f6-491f-9f0a-18359be5044b] Running
	I0717 23:50:29.383857 1833619 system_pods.go:126] duration metric: took 203.57422ms to wait for k8s-apps to be running ...
	I0717 23:50:29.383870 1833619 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 23:50:29.383931 1833619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:50:29.398844 1833619 system_svc.go:56] duration metric: took 14.962713ms WaitForService to wait for kubelet.
	I0717 23:50:29.398875 1833619 kubeadm.go:581] duration metric: took 19.780694804s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 23:50:29.398895 1833619 node_conditions.go:102] verifying NodePressure condition ...
	I0717 23:50:29.577331 1833619 request.go:628] Waited for 178.320911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0717 23:50:29.580228 1833619 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 23:50:29.580259 1833619 node_conditions.go:123] node cpu capacity is 2
	I0717 23:50:29.580272 1833619 node_conditions.go:105] duration metric: took 181.371571ms to run NodePressure ...
	I0717 23:50:29.580314 1833619 start.go:228] waiting for startup goroutines ...
	I0717 23:50:29.580326 1833619 start.go:233] waiting for cluster config update ...
	I0717 23:50:29.580337 1833619 start.go:242] writing updated cluster config ...
	I0717 23:50:29.580683 1833619 ssh_runner.go:195] Run: rm -f paused
	I0717 23:50:29.647091 1833619 start.go:578] kubectl: 1.27.3, cluster: 1.18.20 (minor skew: 9)
	I0717 23:50:29.649575 1833619 out.go:177] 
	W0717 23:50:29.652013 1833619 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.18.20.
	I0717 23:50:29.653931 1833619 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0717 23:50:29.655836 1833619 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-856061" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Jul 17 23:53:39 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:39.426859391Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=18be2a94-4254-49c5-bede-07a48a547e47 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 17 23:53:39 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:39.427059980Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=18be2a94-4254-49c5-bede-07a48a547e47 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 17 23:53:39 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:39.428107354Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-mfxtn/hello-world-app" id=467b1068-1e47-4203-8d80-d7b01a1a37b8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 17 23:53:39 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:39.428222388Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 23:53:39 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:39.526829932Z" level=info msg="Created container 7f8c1d3ea8ceb7f5bbdff663274f6fd9e327c96380dbeea3b5071c968b10ad5b: default/hello-world-app-5f5d8b66bb-mfxtn/hello-world-app" id=467b1068-1e47-4203-8d80-d7b01a1a37b8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 17 23:53:39 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:39.527804372Z" level=info msg="Starting container: 7f8c1d3ea8ceb7f5bbdff663274f6fd9e327c96380dbeea3b5071c968b10ad5b" id=7781b1ce-515a-47d8-a089-93d21d023fc5 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 17 23:53:39 ingress-addon-legacy-856061 conmon[3637]: conmon 7f8c1d3ea8ceb7f5bbdf <ninfo>: container 3648 exited with status 1
	Jul 17 23:53:39 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:39.545675971Z" level=info msg="Started container" PID=3648 containerID=7f8c1d3ea8ceb7f5bbdff663274f6fd9e327c96380dbeea3b5071c968b10ad5b description=default/hello-world-app-5f5d8b66bb-mfxtn/hello-world-app id=7781b1ce-515a-47d8-a089-93d21d023fc5 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=33a10c93159c89a1a540adc7062f0adf9798b31355f97640dddc4e1ad83252d2
	Jul 17 23:53:39 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:39.966063465Z" level=info msg="Removing container: 0ef41d6ca23435bfe10bd0c55752f7bf4ad91faccf1f612aaffc9914689ca16b" id=df3bc567-50f1-4366-ac67-a26de359d5d9 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 17 23:53:39 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:39.989455575Z" level=info msg="Removed container 0ef41d6ca23435bfe10bd0c55752f7bf4ad91faccf1f612aaffc9914689ca16b: default/hello-world-app-5f5d8b66bb-mfxtn/hello-world-app" id=df3bc567-50f1-4366-ac67-a26de359d5d9 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.389374549Z" level=warning msg="Stopping container 6d90eefea107d782e86d811485cc1ee670cc483e472cff936bbe4ef70cdf162b with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=1ddb5c6a-f27e-41c8-930f-fafb92e79f6b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 23:53:40 ingress-addon-legacy-856061 conmon[2731]: conmon 6d90eefea107d782e86d <ninfo>: container 2742 exited with status 137
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.573596478Z" level=info msg="Stopped container 6d90eefea107d782e86d811485cc1ee670cc483e472cff936bbe4ef70cdf162b: ingress-nginx/ingress-nginx-controller-7fcf777cb7-hlp8m/controller" id=d9adc529-5c9f-45d3-9333-2dda53150370 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.574170897Z" level=info msg="Stopping pod sandbox: 481c2a1c0e4d7187f5694a0e9872c1b905c8d3e0c07e94d02540c45d28525653" id=53f4a83a-e7cd-4953-84b4-462387104094 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.574663840Z" level=info msg="Stopped container 6d90eefea107d782e86d811485cc1ee670cc483e472cff936bbe4ef70cdf162b: ingress-nginx/ingress-nginx-controller-7fcf777cb7-hlp8m/controller" id=1ddb5c6a-f27e-41c8-930f-fafb92e79f6b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.575049034Z" level=info msg="Stopping pod sandbox: 481c2a1c0e4d7187f5694a0e9872c1b905c8d3e0c07e94d02540c45d28525653" id=c3589478-8047-4ec7-b742-5fe16c4bdc9d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.579590970Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-2EBKKSDLEYO7I2JA - [0:0]\n:KUBE-HP-TP6KOOVEH2DBSGWD - [0:0]\n-X KUBE-HP-2EBKKSDLEYO7I2JA\n-X KUBE-HP-TP6KOOVEH2DBSGWD\nCOMMIT\n"
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.585003617Z" level=info msg="Closing host port tcp:80"
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.585060199Z" level=info msg="Closing host port tcp:443"
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.588714029Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.588750369Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.588908243Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-hlp8m Namespace:ingress-nginx ID:481c2a1c0e4d7187f5694a0e9872c1b905c8d3e0c07e94d02540c45d28525653 UID:f0aa0045-ec2b-4fdc-9a06-edca95fd74c9 NetNS:/var/run/netns/03d05434-0547-41bd-aaa0-46a23add7b4a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.589056171Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-hlp8m from CNI network \"kindnet\" (type=ptp)"
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.618097619Z" level=info msg="Stopped pod sandbox: 481c2a1c0e4d7187f5694a0e9872c1b905c8d3e0c07e94d02540c45d28525653" id=53f4a83a-e7cd-4953-84b4-462387104094 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 23:53:40 ingress-addon-legacy-856061 crio[895]: time="2023-07-17 23:53:40.618209101Z" level=info msg="Stopped pod sandbox (already stopped): 481c2a1c0e4d7187f5694a0e9872c1b905c8d3e0c07e94d02540c45d28525653" id=c3589478-8047-4ec7-b742-5fe16c4bdc9d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
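	
	The conmon line above reporting status 137 for the ingress controller is the usual 128+N signal encoding: the 2-second graceful-stop timeout expired and the container was killed with signal 9 (SIGKILL). A small Go sketch decoding it, assuming the common 128+N convention:
	
	package main
	
	import (
		"fmt"
		"syscall"
	)
	
	func main() {
		status := 137 // as logged by conmon above
		if status > 128 {
			sig := syscall.Signal(status - 128)
			// Prints: killed by signal 9 (killed), i.e. SIGKILL
			fmt.Printf("killed by signal %d (%s)\n", int(sig), sig)
		}
	}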
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7f8c1d3ea8ceb       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                   6 seconds ago       Exited              hello-world-app           2                   33a10c93159c8       hello-world-app-5f5d8b66bb-mfxtn
	29aaf691ef55b       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                    2 minutes ago       Running             nginx                     0                   3f18c2dddf20f       nginx
	6d90eefea107d       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   481c2a1c0e4d7       ingress-nginx-controller-7fcf777cb7-hlp8m
	1180e202b5336       a883f7fc35610a84d589cbb450eade9face1d1a8b2cbdafa1690cbffe68cfe88                                                   3 minutes ago       Exited              patch                     1                   171cfd86f14dc       ingress-nginx-admission-patch-jgspb
	9af096e598f72       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   9cff52af6dc19       ingress-nginx-admission-create-2gvw7
	d9603f427e3bc       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   c781595a9ed70       coredns-66bff467f8-rfdvp
	d753f131379f3       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   3827b3ddcd6ef       storage-provisioner
	461fbd9a829da       docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f                 3 minutes ago       Running             kindnet-cni               0                   9b29ce03d482f       kindnet-tt7jm
	1e64907289cb1       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   1bbc1940e4b2b       kube-proxy-m7nvc
	96413ef818265       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   4a03eb2a31312       kube-scheduler-ingress-addon-legacy-856061
	0f9948bfbad88       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   d7173ceb560c8       etcd-ingress-addon-legacy-856061
	fa0c07129fa19       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   c7238ddad2f7f       kube-controller-manager-ingress-addon-legacy-856061
	08e51c6d2582c       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   2d6c0d98d066e       kube-apiserver-ingress-addon-legacy-856061
	
	* 
	* ==> coredns [d9603f427e3bc3822f087f77178c2ba334fb41e3b6e03f917cee5e025b55af15] <==
	* [INFO] 10.244.0.5:59369 - 32351 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000159695s
	[INFO] 10.244.0.5:59369 - 19431 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000801264s
	[INFO] 10.244.0.5:41375 - 48557 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002044344s
	[INFO] 10.244.0.5:41375 - 6545 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00232257s
	[INFO] 10.244.0.5:59369 - 55030 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002348768s
	[INFO] 10.244.0.5:59369 - 33998 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000142046s
	[INFO] 10.244.0.5:41375 - 16317 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000035709s
	[INFO] 10.244.0.5:60241 - 12510 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000085587s
	[INFO] 10.244.0.5:56030 - 21463 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00003835s
	[INFO] 10.244.0.5:56030 - 6434 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000041616s
	[INFO] 10.244.0.5:56030 - 14315 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033403s
	[INFO] 10.244.0.5:56030 - 38155 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035765s
	[INFO] 10.244.0.5:56030 - 61967 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031188s
	[INFO] 10.244.0.5:60241 - 35429 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000038522s
	[INFO] 10.244.0.5:60241 - 46518 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036315s
	[INFO] 10.244.0.5:56030 - 36179 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002418s
	[INFO] 10.244.0.5:60241 - 15983 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035487s
	[INFO] 10.244.0.5:60241 - 32040 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005833s
	[INFO] 10.244.0.5:60241 - 1920 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038654s
	[INFO] 10.244.0.5:56030 - 22191 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001190396s
	[INFO] 10.244.0.5:60241 - 46433 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002089801s
	[INFO] 10.244.0.5:56030 - 4922 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001604193s
	[INFO] 10.244.0.5:56030 - 10695 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004402s
	[INFO] 10.244.0.5:60241 - 56474 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000837596s
	[INFO] 10.244.0.5:60241 - 14188 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005248s
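	
	The burst of NXDOMAIN answers above is ordinary resolv.conf search-list expansion, not a lookup failure: with ndots:5, "hello-world-app.default.svc.cluster.local" (four dots) is first tried with each search domain appended, and only the final absolute query returns NOERROR. A minimal Go sketch of that expansion, with the search list inferred from the query names in the log:
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		name := "hello-world-app.default.svc.cluster.local"
		// Search list inferred from the suffixes seen in the CoreDNS log.
		search := []string{
			"ingress-nginx.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
			"us-east-2.compute.internal",
		}
		ndots := 5
		if strings.Count(name, ".") < ndots { // fewer dots than ndots: search first
			for _, s := range search {
				fmt.Println(name + "." + s) // each of these returned NXDOMAIN above
			}
		}
		fmt.Println(name + ".") // absolute query: NOERROR
	}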
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-856061
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-856061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=ingress-addon-legacy-856061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T23_49_54_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 23:49:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-856061
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 23:53:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:53:27 +0000   Mon, 17 Jul 2023 23:49:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:53:27 +0000   Mon, 17 Jul 2023 23:49:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:53:27 +0000   Mon, 17 Jul 2023 23:49:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:53:27 +0000   Mon, 17 Jul 2023 23:50:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-856061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec0c36cb87b541a18f001886630c12a4
	  System UUID:                0f1c28b3-6afe-4564-b054-fbdf49ba700c
	  Boot ID:                    233fb95c-536d-4fc4-882b-c04fac35e1a2
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-mfxtn                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-rfdvp                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m38s
	  kube-system                 etcd-ingress-addon-legacy-856061                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kindnet-tt7jm                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m38s
	  kube-system                 kube-apiserver-ingress-addon-legacy-856061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-856061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-m7nvc                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-scheduler-ingress-addon-legacy-856061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m4s (x5 over 4m4s)  kubelet     Node ingress-addon-legacy-856061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x5 over 4m4s)  kubelet     Node ingress-addon-legacy-856061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x5 over 4m4s)  kubelet     Node ingress-addon-legacy-856061 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m49s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s                kubelet     Node ingress-addon-legacy-856061 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s                kubelet     Node ingress-addon-legacy-856061 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s                kubelet     Node ingress-addon-legacy-856061 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m37s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m29s                kubelet     Node ingress-addon-legacy-856061 status is now: NodeReady
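	
	The percentages in "Allocated resources" above follow directly from the node's allocatable capacity (2 CPUs = 2000m, 8022632Ki memory); for example, 750m/2000m rounds down to 37%. A short Go sketch reproducing the table's figures:
	
	package main
	
	import "fmt"
	
	func main() {
		// Allocatable: 2 CPUs = 2000m, memory = 8022632Ki; Mi values converted to Ki.
		fmt.Printf("cpu requests: %d%%\n", 750*100/2000)         // 37%
		fmt.Printf("cpu limits:   %d%%\n", 100*100/2000)         // 5%
		fmt.Printf("mem requests: %d%%\n", 120*1024*100/8022632) // 1%
		fmt.Printf("mem limits:   %d%%\n", 220*1024*100/8022632) // 2%
	}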
	
	* 
	* ==> dmesg <==
	* [  +0.001025] FS-Cache: O-key=[8] '8b663b0000000000'
	[  +0.000680] FS-Cache: N-cookie c=00000078 [p=0000006f fl=2 nc=0 na=1]
	[  +0.000977] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=000000000a000c51
	[  +0.001041] FS-Cache: N-key=[8] '8b663b0000000000'
	[  +0.002357] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=00000072 [p=0000006f fl=226 nc=0 na=1]
	[  +0.000970] FS-Cache: O-cookie d=00000000620abd40{9p.inode} n=000000005904d9c7
	[  +0.001053] FS-Cache: O-key=[8] '8b663b0000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000079 [p=0000006f fl=2 nc=0 na=1]
	[  +0.001085] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=00000000c127b604
	[  +0.001087] FS-Cache: N-key=[8] '8b663b0000000000'
	[  +3.135902] FS-Cache: Duplicate cookie detected
	[  +0.000798] FS-Cache: O-cookie c=00000070 [p=0000006f fl=226 nc=0 na=1]
	[  +0.000945] FS-Cache: O-cookie d=00000000620abd40{9p.inode} n=000000000e847315
	[  +0.001098] FS-Cache: O-key=[8] '8a663b0000000000'
	[  +0.000779] FS-Cache: N-cookie c=0000007b [p=0000006f fl=2 nc=0 na=1]
	[  +0.000945] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=000000000a000c51
	[  +0.001045] FS-Cache: N-key=[8] '8a663b0000000000'
	[  +0.290809] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000075 [p=0000006f fl=226 nc=0 na=1]
	[  +0.000970] FS-Cache: O-cookie d=00000000620abd40{9p.inode} n=0000000051211df6
	[  +0.001103] FS-Cache: O-key=[8] '90663b0000000000'
	[  +0.000713] FS-Cache: N-cookie c=0000007c [p=0000006f fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=00000000e78c3482
	[  +0.001059] FS-Cache: N-key=[8] '90663b0000000000'
	
	* 
	* ==> etcd [0f9948bfbad8862daafc1277f141dfa0cd3b035f023f2d11b1da3542fbe1c62b] <==
	* raft2023/07/17 23:49:45 INFO: aec36adc501070cc became follower at term 0
	raft2023/07/17 23:49:45 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/07/17 23:49:45 INFO: aec36adc501070cc became follower at term 1
	raft2023/07/17 23:49:45 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-17 23:49:45.909580 W | auth: simple token is not cryptographically signed
	2023-07-17 23:49:46.002712 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-17 23:49:46.006136 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-17 23:49:46.006376 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-17 23:49:46.006707 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-17 23:49:46.007248 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/07/17 23:49:46 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-17 23:49:46.007652 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/07/17 23:49:46 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/07/17 23:49:46 INFO: aec36adc501070cc became candidate at term 2
	raft2023/07/17 23:49:46 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/07/17 23:49:46 INFO: aec36adc501070cc became leader at term 2
	raft2023/07/17 23:49:46 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-07-17 23:49:46.613196 I | etcdserver: published {Name:ingress-addon-legacy-856061 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-07-17 23:49:46.613385 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-17 23:49:46.613454 I | embed: ready to serve client requests
	2023-07-17 23:49:46.614870 I | embed: serving client requests on 192.168.49.2:2379
	2023-07-17 23:49:46.614932 I | embed: ready to serve client requests
	2023-07-17 23:49:46.616255 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-17 23:49:46.630508 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-17 23:49:46.630632 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  23:53:46 up  8:36,  0 users,  load average: 0.38, 0.97, 1.64
	Linux ingress-addon-legacy-856061 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [461fbd9a829da3549431a70c84db1d8a043744128e2433bacdd340ea1c1de758] <==
	* I0717 23:51:44.200140       1 main.go:227] handling current node
	I0717 23:51:54.203582       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:51:54.203615       1 main.go:227] handling current node
	I0717 23:52:04.213805       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:52:04.213922       1 main.go:227] handling current node
	I0717 23:52:14.217279       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:52:14.217311       1 main.go:227] handling current node
	I0717 23:52:24.221725       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:52:24.221757       1 main.go:227] handling current node
	I0717 23:52:34.229492       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:52:34.229524       1 main.go:227] handling current node
	I0717 23:52:44.235508       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:52:44.235540       1 main.go:227] handling current node
	I0717 23:52:54.247627       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:52:54.247657       1 main.go:227] handling current node
	I0717 23:53:04.254709       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:53:04.254738       1 main.go:227] handling current node
	I0717 23:53:14.258131       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:53:14.258290       1 main.go:227] handling current node
	I0717 23:53:24.269520       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:53:24.269551       1 main.go:227] handling current node
	I0717 23:53:34.273409       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:53:34.273444       1 main.go:227] handling current node
	I0717 23:53:44.284415       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 23:53:44.284448       1 main.go:227] handling current node
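	
	The kindnet log above shows a steady reconcile loop: roughly every 10 seconds it re-lists node IPs and handles the current node. A minimal Go sketch of that cadence (the loop shape is an assumption; only the ~10s interval comes from the timestamps):
	
	package main
	
	import (
		"log"
		"time"
	)
	
	func main() {
		t := time.NewTicker(10 * time.Second)
		defer t.Stop()
		for range t.C {
			// The real daemon lists nodes and reconciles routes here; this
			// just mirrors the two log lines emitted per cycle above.
			log.Println("Handling node with IPs: map[192.168.49.2:{}]")
			log.Println("handling current node")
		}
	}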
	
	* 
	* ==> kube-apiserver [08e51c6d2582c924116d4a63f4455e23a792052f6f64bc17b9cce588733afd11] <==
	* E0717 23:49:50.643317       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0717 23:49:50.775542       1 cache.go:39] Caches are synced for autoregister controller
	I0717 23:49:50.776160       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 23:49:50.776460       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0717 23:49:50.778680       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0717 23:49:50.784954       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 23:49:51.574275       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0717 23:49:51.574328       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 23:49:51.580135       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0717 23:49:51.584161       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0717 23:49:51.584179       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0717 23:49:52.026170       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 23:49:52.076832       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0717 23:49:52.166723       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0717 23:49:52.167734       1 controller.go:609] quota admission added evaluator for: endpoints
	I0717 23:49:52.171368       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 23:49:52.968423       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0717 23:49:53.883634       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0717 23:49:53.994645       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0717 23:49:57.307184       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 23:50:08.911712       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0717 23:50:08.954160       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0717 23:50:30.507576       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0717 23:51:00.995188       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0717 23:53:38.389100       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [fa0c07129fa1990c3858654be4813b248e5dfc94516b7585c1f2182f41482870] <==
	* I0717 23:50:09.066642       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 23:50:09.084460       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"90135c1d-c53b-40bf-8f5e-0a2496cf5664", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-4szfn
	I0717 23:50:09.085197       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"a2e62659-ed7a-4a9a-bfa7-a17894addf7d", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-m7nvc
	I0717 23:50:09.106505       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I0717 23:50:09.116650       1 range_allocator.go:373] Set node ingress-addon-legacy-856061 PodCIDR to [10.244.0.0/24]
	I0717 23:50:09.118204       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"3485e396-c1d4-431f-8f6d-330d43b49d2c", APIVersion:"apps/v1", ResourceVersion:"235", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-tt7jm
	I0717 23:50:09.119316       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 23:50:09.119396       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0717 23:50:09.135370       1 shared_informer.go:230] Caches are synced for garbage collector 
	E0717 23:50:09.266377       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"a2e62659-ed7a-4a9a-bfa7-a17894addf7d", ResourceVersion:"221", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63825234593, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000b9ad80), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4000b9ada0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000b9adc0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000d5a0c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4000b9ade0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000b9ae00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000b9ae80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40005669b0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400050d7a8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40002b3810), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400143e538)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400050d818)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0717 23:50:09.280842       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"3485e396-c1d4-431f-8f6d-330d43b49d2c", ResourceVersion:"235", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63825234594, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230511-dc714da8\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000b9aee0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000b9af00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000b9af20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000b9af40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000b9af60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000b9af80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230511-dc714da8", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000b9afa0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000b9afe0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000566b40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400050de98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40002b3880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400143e588)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400050df80)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0717 23:50:09.495782       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"ae72c8ff-3ddb-4bd3-9f2f-2c7a975f5e54", APIVersion:"apps/v1", ResourceVersion:"345", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	E0717 23:50:09.564451       1 replica_set.go:536] sync "kube-system/coredns-66bff467f8" failed with Operation cannot be fulfilled on replicasets.apps "coredns-66bff467f8": the object has been modified; please apply your changes to the latest version and try again
	I0717 23:50:09.673121       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"90135c1d-c53b-40bf-8f5e-0a2496cf5664", APIVersion:"apps/v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-4szfn
	I0717 23:50:18.976925       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0717 23:50:30.501283       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"57f3991e-b383-4256-8aae-e152dc9537c9", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0717 23:50:30.522702       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"87032f15-8768-48ec-b43e-789beaf0ad72", APIVersion:"apps/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-hlp8m
	I0717 23:50:30.566767       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"04c03f44-41b8-4bb2-a20d-995f70d86f4e", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-2gvw7
	I0717 23:50:30.600113       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9a5041d4-754b-4a1b-96f2-87ab4ae26100", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-jgspb
	I0717 23:50:33.640386       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"04c03f44-41b8-4bb2-a20d-995f70d86f4e", APIVersion:"batch/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 23:50:34.646322       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9a5041d4-754b-4a1b-96f2-87ab4ae26100", APIVersion:"batch/v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 23:53:20.548279       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"2478cfa2-c7d2-483f-8f45-f87533b9761e", APIVersion:"apps/v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0717 23:53:20.572742       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"c90b33ac-703f-43e3-9f31-b69b7020ed66", APIVersion:"apps/v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-mfxtn
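	
	The "object has been modified" failures above are ordinary optimistic-concurrency conflicts on resourceVersion; the controllers requeue and retry, which is why the daemonsets still converge. A hedged client-go sketch of the standard retry-on-conflict pattern (the mutation and names are hypothetical; this is not the controller-manager's code):
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/util/retry"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if ds.Annotations == nil {
				ds.Annotations = map[string]string{}
			}
			ds.Annotations["example/touched"] = "true" // hypothetical mutation
			_, err = cs.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
			return err // a Conflict error here triggers another Get+Update attempt
		})
		if err != nil {
			fmt.Println("update failed:", err)
		}
	}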
	
	* 
	* ==> kube-proxy [1e64907289cb1301e7969f2591e9c402730d6ff7e5a8b965ae261875e2543398] <==
	* W0717 23:50:09.910665       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0717 23:50:09.921366       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0717 23:50:09.921414       1 server_others.go:186] Using iptables Proxier.
	I0717 23:50:09.922310       1 server.go:583] Version: v1.18.20
	I0717 23:50:09.923690       1 config.go:315] Starting service config controller
	I0717 23:50:09.923707       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0717 23:50:09.923755       1 config.go:133] Starting endpoints config controller
	I0717 23:50:09.923768       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0717 23:50:10.023996       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0717 23:50:10.024008       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [96413ef8182656271aaea2855ef9a21405ff701ceb5c1d372cac7d80a9f814a8] <==
	* W0717 23:49:50.697323       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 23:49:50.697354       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 23:49:50.697397       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 23:49:50.721331       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0717 23:49:50.721419       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0717 23:49:50.723807       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 23:49:50.723886       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 23:49:50.726314       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0717 23:49:50.727634       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0717 23:49:50.728997       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 23:49:50.734722       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 23:49:50.734731       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 23:49:50.734811       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 23:49:50.734873       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 23:49:50.734931       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 23:49:50.734985       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 23:49:50.735054       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 23:49:50.735119       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 23:49:50.735173       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 23:49:50.736684       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 23:49:50.736750       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 23:49:51.635409       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 23:49:54.224654       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0717 23:50:09.674662       1 factory.go:509] A pod kube-system/coredns-66bff467f8-4szfn no longer exists
	E0717 23:50:09.866213       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Jul 17 23:53:24 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:24.940363    1603 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0ef41d6ca23435bfe10bd0c55752f7bf4ad91faccf1f612aaffc9914689ca16b
	Jul 17 23:53:24 ingress-addon-legacy-856061 kubelet[1603]: E0717 23:53:24.940592    1603 pod_workers.go:191] Error syncing pod 24fd9d5e-921a-4065-a690-b8637946c9eb ("hello-world-app-5f5d8b66bb-mfxtn_default(24fd9d5e-921a-4065-a690-b8637946c9eb)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-mfxtn_default(24fd9d5e-921a-4065-a690-b8637946c9eb)"
	Jul 17 23:53:25 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:25.942853    1603 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0ef41d6ca23435bfe10bd0c55752f7bf4ad91faccf1f612aaffc9914689ca16b
	Jul 17 23:53:25 ingress-addon-legacy-856061 kubelet[1603]: E0717 23:53:25.943109    1603 pod_workers.go:191] Error syncing pod 24fd9d5e-921a-4065-a690-b8637946c9eb ("hello-world-app-5f5d8b66bb-mfxtn_default(24fd9d5e-921a-4065-a690-b8637946c9eb)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-mfxtn_default(24fd9d5e-921a-4065-a690-b8637946c9eb)"
	Jul 17 23:53:27 ingress-addon-legacy-856061 kubelet[1603]: E0717 23:53:27.425645    1603 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 23:53:27 ingress-addon-legacy-856061 kubelet[1603]: E0717 23:53:27.425683    1603 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 23:53:27 ingress-addon-legacy-856061 kubelet[1603]: E0717 23:53:27.425729    1603 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 23:53:27 ingress-addon-legacy-856061 kubelet[1603]: E0717 23:53:27.425762    1603 pod_workers.go:191] Error syncing pod ecd00e83-edb1-443f-97a8-f60406caf973 ("kube-ingress-dns-minikube_kube-system(ecd00e83-edb1-443f-97a8-f60406caf973)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 17 23:53:36 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:36.490832    1603 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-kpkp9" (UniqueName: "kubernetes.io/secret/ecd00e83-edb1-443f-97a8-f60406caf973-minikube-ingress-dns-token-kpkp9") pod "ecd00e83-edb1-443f-97a8-f60406caf973" (UID: "ecd00e83-edb1-443f-97a8-f60406caf973")
	Jul 17 23:53:36 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:36.495355    1603 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd00e83-edb1-443f-97a8-f60406caf973-minikube-ingress-dns-token-kpkp9" (OuterVolumeSpecName: "minikube-ingress-dns-token-kpkp9") pod "ecd00e83-edb1-443f-97a8-f60406caf973" (UID: "ecd00e83-edb1-443f-97a8-f60406caf973"). InnerVolumeSpecName "minikube-ingress-dns-token-kpkp9". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 23:53:36 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:36.591210    1603 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-kpkp9" (UniqueName: "kubernetes.io/secret/ecd00e83-edb1-443f-97a8-f60406caf973-minikube-ingress-dns-token-kpkp9") on node "ingress-addon-legacy-856061" DevicePath ""
	Jul 17 23:53:37 ingress-addon-legacy-856061 kubelet[1603]: W0717 23:53:37.959636    1603 pod_container_deletor.go:77] Container "5b664a44c081bcd22461288194ef68d44426e2dbe7269513c9f5cf4745ac18cc" not found in pod's containers
	Jul 17 23:53:38 ingress-addon-legacy-856061 kubelet[1603]: E0717 23:53:38.372612    1603 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-hlp8m.1772ccd35bf77dbf", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-hlp8m", UID:"f0aa0045-ec2b-4fdc-9a06-edca95fd74c9", APIVersion:"v1", ResourceVersion:"486", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-856061"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12593009614e9bf, ext:224527575152, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12593009614e9bf, ext:224527575152, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-hlp8m.1772ccd35bf77dbf" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 23:53:38 ingress-addon-legacy-856061 kubelet[1603]: E0717 23:53:38.392048    1603 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-hlp8m.1772ccd35bf77dbf", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-hlp8m", UID:"f0aa0045-ec2b-4fdc-9a06-edca95fd74c9", APIVersion:"v1", ResourceVersion:"486", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-856061"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12593009614e9bf, ext:224527575152, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc125930096fbe40c, ext:224542712518, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-hlp8m.1772ccd35bf77dbf" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 23:53:39 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:39.424412    1603 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0ef41d6ca23435bfe10bd0c55752f7bf4ad91faccf1f612aaffc9914689ca16b
	Jul 17 23:53:39 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:39.964012    1603 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0ef41d6ca23435bfe10bd0c55752f7bf4ad91faccf1f612aaffc9914689ca16b
	Jul 17 23:53:39 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:39.964284    1603 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7f8c1d3ea8ceb7f5bbdff663274f6fd9e327c96380dbeea3b5071c968b10ad5b
	Jul 17 23:53:39 ingress-addon-legacy-856061 kubelet[1603]: E0717 23:53:39.964584    1603 pod_workers.go:191] Error syncing pod 24fd9d5e-921a-4065-a690-b8637946c9eb ("hello-world-app-5f5d8b66bb-mfxtn_default(24fd9d5e-921a-4065-a690-b8637946c9eb)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-mfxtn_default(24fd9d5e-921a-4065-a690-b8637946c9eb)"
	Jul 17 23:53:40 ingress-addon-legacy-856061 kubelet[1603]: W0717 23:53:40.966710    1603 pod_container_deletor.go:77] Container "481c2a1c0e4d7187f5694a0e9872c1b905c8d3e0c07e94d02540c45d28525653" not found in pod's containers
	Jul 17 23:53:42 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:42.509827    1603 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-cvth4" (UniqueName: "kubernetes.io/secret/f0aa0045-ec2b-4fdc-9a06-edca95fd74c9-ingress-nginx-token-cvth4") pod "f0aa0045-ec2b-4fdc-9a06-edca95fd74c9" (UID: "f0aa0045-ec2b-4fdc-9a06-edca95fd74c9")
	Jul 17 23:53:42 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:42.509891    1603 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f0aa0045-ec2b-4fdc-9a06-edca95fd74c9-webhook-cert") pod "f0aa0045-ec2b-4fdc-9a06-edca95fd74c9" (UID: "f0aa0045-ec2b-4fdc-9a06-edca95fd74c9")
	Jul 17 23:53:42 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:42.519852    1603 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0aa0045-ec2b-4fdc-9a06-edca95fd74c9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f0aa0045-ec2b-4fdc-9a06-edca95fd74c9" (UID: "f0aa0045-ec2b-4fdc-9a06-edca95fd74c9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 23:53:42 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:42.520138    1603 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0aa0045-ec2b-4fdc-9a06-edca95fd74c9-ingress-nginx-token-cvth4" (OuterVolumeSpecName: "ingress-nginx-token-cvth4") pod "f0aa0045-ec2b-4fdc-9a06-edca95fd74c9" (UID: "f0aa0045-ec2b-4fdc-9a06-edca95fd74c9"). InnerVolumeSpecName "ingress-nginx-token-cvth4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 23:53:42 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:42.610239    1603 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f0aa0045-ec2b-4fdc-9a06-edca95fd74c9-webhook-cert") on node "ingress-addon-legacy-856061" DevicePath ""
	Jul 17 23:53:42 ingress-addon-legacy-856061 kubelet[1603]: I0717 23:53:42.610291    1603 reconciler.go:319] Volume detached for volume "ingress-nginx-token-cvth4" (UniqueName: "kubernetes.io/secret/f0aa0045-ec2b-4fdc-9a06-edca95fd74c9-ingress-nginx-token-cvth4") on node "ingress-addon-legacy-856061" DevicePath ""
	
	* 
	* ==> storage-provisioner [d753f131379f3d8ae8cebebc72cf080352d5a80d31db91a016af4a4cf2f64b19] <==
	* I0717 23:50:23.102980       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 23:50:23.117416       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 23:50:23.117478       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 23:50:23.124801       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 23:50:23.125325       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c88287b1-7430-4c50-83c8-842a83f1561b", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-856061_9ec0dc6b-85b2-4c80-8f0e-75aba2756449 became leader
	I0717 23:50:23.125584       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-856061_9ec0dc6b-85b2-4c80-8f0e-75aba2756449!
	I0717 23:50:23.226075       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-856061_9ec0dc6b-85b2-4c80-8f0e-75aba2756449!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-856061 -n ingress-addon-legacy-856061
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-856061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (184.58s)
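Analysis note: in the kubelet log above, kube-ingress-dns-minikube never starts because CRI-O reports an ImageInspectError: the short image name cryptexlabs/minikube-ingress-dns:0.3.0 did not resolve to an alias and no unqualified-search registries are defined in /etc/containers/registries.conf inside the node. A minimal diagnostic sketch, assuming the node is reachable via minikube ssh; the registries.conf append is a hypothetical workaround (unqualified-search-registries is a top-level key, shown as an append only for brevity) and was not part of this run:

	# Check whether any unqualified-search registries are configured in the node.
	minikube -p ingress-addon-legacy-856061 ssh -- grep -n unqualified-search /etc/containers/registries.conf

	# Hypothetical workaround: let CRI-O fall back to docker.io for short names,
	# then restart CRI-O so the change takes effect.
	minikube -p ingress-addon-legacy-856061 ssh -- sudo sh -c \
	  'echo "unqualified-search-registries = [\"docker.io\"]" >> /etc/containers/registries.conf && systemctl restart crio'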

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-d4jjr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-d4jjr -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-d4jjr -- sh -c "ping -c 1 192.168.58.1": exit status 1 (229.120942ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-d4jjr): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-qfp74 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-qfp74 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-qfp74 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (224.375012ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-qfp74): exit status 1
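Analysis note: the "ping: permission denied (are you root?)" error from busybox means the pod could open neither a raw ICMP socket (which needs CAP_NET_RAW, a capability CRI-O does not grant by default, unlike Docker) nor an unprivileged ICMP datagram socket (allowed only for GIDs inside net.ipv4.ping_group_range, which defaults to the empty range "1 0"). A short diagnostic sketch against the same pods; these commands are illustrative and were not run here:

	# Show the unprivileged-ping GID range and the pod's IDs; an empty range
	# ("1 0") combined with a missing CAP_NET_RAW explains the failure.
	kubectl --context multinode-451668 exec busybox-67b7f59bb-d4jjr -- sh -c \
	  'cat /proc/sys/net/ipv4/ping_group_range; id'

	# Inspect the effective capability mask of the container's init process.
	kubectl --context multinode-451668 exec busybox-67b7f59bb-d4jjr -- sh -c \
	  'grep CapEff /proc/1/status'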
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-451668
helpers_test.go:235: (dbg) docker inspect multinode-451668:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "865d9e37b02c1a77b484f2287a980e8a32e41c2b0e7dc6accbc61f8116fda149",
	        "Created": "2023-07-18T00:00:11.197533298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1870544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-18T00:00:11.520123037Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/865d9e37b02c1a77b484f2287a980e8a32e41c2b0e7dc6accbc61f8116fda149/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/865d9e37b02c1a77b484f2287a980e8a32e41c2b0e7dc6accbc61f8116fda149/hostname",
	        "HostsPath": "/var/lib/docker/containers/865d9e37b02c1a77b484f2287a980e8a32e41c2b0e7dc6accbc61f8116fda149/hosts",
	        "LogPath": "/var/lib/docker/containers/865d9e37b02c1a77b484f2287a980e8a32e41c2b0e7dc6accbc61f8116fda149/865d9e37b02c1a77b484f2287a980e8a32e41c2b0e7dc6accbc61f8116fda149-json.log",
	        "Name": "/multinode-451668",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-451668:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-451668",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f29e5f82863e11fc8e6d561709bb26e2eff44b0c3847fd6942e1c83fc77739de-init/diff:/var/lib/docker/overlay2/fb8637673150b5a3287a0dca2348bba5adfe3231dd83829c5a54b472b17aad64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f29e5f82863e11fc8e6d561709bb26e2eff44b0c3847fd6942e1c83fc77739de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f29e5f82863e11fc8e6d561709bb26e2eff44b0c3847fd6942e1c83fc77739de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f29e5f82863e11fc8e6d561709bb26e2eff44b0c3847fd6942e1c83fc77739de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-451668",
	                "Source": "/var/lib/docker/volumes/multinode-451668/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-451668",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-451668",
	                "name.minikube.sigs.k8s.io": "multinode-451668",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e422efdfb5dfb48fdc32323adfa393e47bc266bf047e4a0e00b6f2899749088",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34738"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34737"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34734"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34736"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34735"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3e422efdfb5d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-451668": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "865d9e37b02c",
	                        "multinode-451668"
	                    ],
	                    "NetworkID": "36f82de40cc241b64d28e16fc068c78c040a6d1844e3baa226ed1ba3da4e8d57",
	                    "EndpointID": "242b3de7dfe1565bef046f5a3c64be08479020d87c9802ad381e62b61d25366a",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-451668 -n multinode-451668
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-451668 logs -n 25: (1.722870668s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-244072                           | mount-start-2-244072 | jenkins | v1.31.0 | 17 Jul 23 23:59 UTC | 17 Jul 23 23:59 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-244072 ssh -- ls                    | mount-start-2-244072 | jenkins | v1.31.0 | 17 Jul 23 23:59 UTC | 17 Jul 23 23:59 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-242423                           | mount-start-1-242423 | jenkins | v1.31.0 | 17 Jul 23 23:59 UTC | 17 Jul 23 23:59 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-244072 ssh -- ls                    | mount-start-2-244072 | jenkins | v1.31.0 | 17 Jul 23 23:59 UTC | 17 Jul 23 23:59 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-244072                           | mount-start-2-244072 | jenkins | v1.31.0 | 17 Jul 23 23:59 UTC | 17 Jul 23 23:59 UTC |
	| start   | -p mount-start-2-244072                           | mount-start-2-244072 | jenkins | v1.31.0 | 17 Jul 23 23:59 UTC | 18 Jul 23 00:00 UTC |
	| ssh     | mount-start-2-244072 ssh -- ls                    | mount-start-2-244072 | jenkins | v1.31.0 | 18 Jul 23 00:00 UTC | 18 Jul 23 00:00 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-244072                           | mount-start-2-244072 | jenkins | v1.31.0 | 18 Jul 23 00:00 UTC | 18 Jul 23 00:00 UTC |
	| delete  | -p mount-start-1-242423                           | mount-start-1-242423 | jenkins | v1.31.0 | 18 Jul 23 00:00 UTC | 18 Jul 23 00:00 UTC |
	| start   | -p multinode-451668                               | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:00 UTC | 18 Jul 23 00:02 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- apply -f                   | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- rollout                    | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- get pods -o                | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- get pods -o                | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- exec                       | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | busybox-67b7f59bb-d4jjr --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- exec                       | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | busybox-67b7f59bb-qfp74 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- exec                       | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | busybox-67b7f59bb-d4jjr --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- exec                       | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | busybox-67b7f59bb-qfp74 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- exec                       | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | busybox-67b7f59bb-d4jjr -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- exec                       | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | busybox-67b7f59bb-qfp74 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- get pods -o                | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- exec                       | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | busybox-67b7f59bb-d4jjr                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- exec                       | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC |                     |
	|         | busybox-67b7f59bb-d4jjr -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- exec                       | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC | 18 Jul 23 00:02 UTC |
	|         | busybox-67b7f59bb-qfp74                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-451668 -- exec                       | multinode-451668     | jenkins | v1.31.0 | 18 Jul 23 00:02 UTC |                     |
	|         | busybox-67b7f59bb-qfp74 -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/18 00:00:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 00:00:05.937345 1870087 out.go:296] Setting OutFile to fd 1 ...
	I0718 00:00:05.937467 1870087 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:00:05.937472 1870087 out.go:309] Setting ErrFile to fd 2...
	I0718 00:00:05.937480 1870087 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:00:05.937760 1870087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	I0718 00:00:05.938171 1870087 out.go:303] Setting JSON to false
	I0718 00:00:05.939168 1870087 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":31350,"bootTime":1689607056,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0718 00:00:05.939241 1870087 start.go:138] virtualization:  
	I0718 00:00:05.942061 1870087 out.go:177] * [multinode-451668] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0718 00:00:05.945311 1870087 out.go:177]   - MINIKUBE_LOCATION=16899
	I0718 00:00:05.947139 1870087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 00:00:05.945524 1870087 notify.go:220] Checking for updates...
	I0718 00:00:05.951840 1870087 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0718 00:00:05.953846 1870087 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	I0718 00:00:05.955664 1870087 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0718 00:00:05.957606 1870087 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 00:00:05.960516 1870087 driver.go:373] Setting default libvirt URI to qemu:///system
	I0718 00:00:05.983913 1870087 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0718 00:00:05.984011 1870087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 00:00:06.077494 1870087 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-18 00:00:06.066984797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0718 00:00:06.077627 1870087 docker.go:294] overlay module found
	I0718 00:00:06.081742 1870087 out.go:177] * Using the docker driver based on user configuration
	I0718 00:00:06.083866 1870087 start.go:298] selected driver: docker
	I0718 00:00:06.083896 1870087 start.go:880] validating driver "docker" against <nil>
	I0718 00:00:06.083912 1870087 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 00:00:06.084562 1870087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 00:00:06.151330 1870087 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-18 00:00:06.141457428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0718 00:00:06.151499 1870087 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0718 00:00:06.151721 1870087 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 00:00:06.154057 1870087 out.go:177] * Using Docker driver with root privileges
	I0718 00:00:06.155895 1870087 cni.go:84] Creating CNI manager for ""
	I0718 00:00:06.155913 1870087 cni.go:137] 0 nodes found, recommending kindnet
	I0718 00:00:06.155927 1870087 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0718 00:00:06.155943 1870087 start_flags.go:319] config:
	{Name:multinode-451668 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-451668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0718 00:00:06.157810 1870087 out.go:177] * Starting control plane node multinode-451668 in cluster multinode-451668
	I0718 00:00:06.159807 1870087 cache.go:122] Beginning downloading kic base image for docker with crio
	I0718 00:00:06.161561 1870087 out.go:177] * Pulling base image ...
	I0718 00:00:06.163333 1870087 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0718 00:00:06.163390 1870087 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0718 00:00:06.163402 1870087 cache.go:57] Caching tarball of preloaded images
	I0718 00:00:06.163402 1870087 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0718 00:00:06.163474 1870087 preload.go:174] Found /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0718 00:00:06.163483 1870087 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0718 00:00:06.163889 1870087 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/config.json ...
	I0718 00:00:06.163922 1870087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/config.json: {Name:mkd97d71cb983eccf2bc5a750b27e6ba7b8cbeb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 00:00:06.180655 1870087 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0718 00:00:06.180676 1870087 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0718 00:00:06.180700 1870087 cache.go:195] Successfully downloaded all kic artifacts
	I0718 00:00:06.180766 1870087 start.go:365] acquiring machines lock for multinode-451668: {Name:mk5471169e834a283cfabcb1a1b1694c33d0e810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:00:06.180887 1870087 start.go:369] acquired machines lock for "multinode-451668" in 103.81µs
	I0718 00:00:06.180915 1870087 start.go:93] Provisioning new machine with config: &{Name:multinode-451668 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-451668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0718 00:00:06.181000 1870087 start.go:125] createHost starting for "" (driver="docker")
	I0718 00:00:06.183364 1870087 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 00:00:06.183618 1870087 start.go:159] libmachine.API.Create for "multinode-451668" (driver="docker")
	I0718 00:00:06.183642 1870087 client.go:168] LocalClient.Create starting
	I0718 00:00:06.183744 1870087 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem
	I0718 00:00:06.183779 1870087 main.go:141] libmachine: Decoding PEM data...
	I0718 00:00:06.183794 1870087 main.go:141] libmachine: Parsing certificate...
	I0718 00:00:06.183854 1870087 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem
	I0718 00:00:06.183872 1870087 main.go:141] libmachine: Decoding PEM data...
	I0718 00:00:06.183883 1870087 main.go:141] libmachine: Parsing certificate...
	I0718 00:00:06.184228 1870087 cli_runner.go:164] Run: docker network inspect multinode-451668 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 00:00:06.201274 1870087 cli_runner.go:211] docker network inspect multinode-451668 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 00:00:06.201363 1870087 network_create.go:281] running [docker network inspect multinode-451668] to gather additional debugging logs...
	I0718 00:00:06.201383 1870087 cli_runner.go:164] Run: docker network inspect multinode-451668
	W0718 00:00:06.218354 1870087 cli_runner.go:211] docker network inspect multinode-451668 returned with exit code 1
	I0718 00:00:06.218388 1870087 network_create.go:284] error running [docker network inspect multinode-451668]: docker network inspect multinode-451668: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-451668 not found
	I0718 00:00:06.218400 1870087 network_create.go:286] output of [docker network inspect multinode-451668]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-451668 not found
	
	** /stderr **
	I0718 00:00:06.218526 1870087 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 00:00:06.236475 1870087 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a9366c9ca7aa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:79:c8:ee:aa} reservation:<nil>}
	I0718 00:00:06.236873 1870087 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000c44640}
	I0718 00:00:06.236894 1870087 network_create.go:123] attempt to create docker network multinode-451668 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0718 00:00:06.236952 1870087 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-451668 multinode-451668
	I0718 00:00:06.305436 1870087 network_create.go:107] docker network multinode-451668 192.168.58.0/24 created
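For context on the subnet scan logged just above (192.168.49.0/24 skipped as taken, 192.168.58.0/24 chosen and handed to docker network create): a minimal Go sketch of that pattern. The +9 step between candidates and the interface-based probe are assumptions for illustration, inferred from the observed 49 -> 58 jump, not minikube's exact implementation.

    package main

    import (
    	"fmt"
    	"net"
    )

    // subnetTaken reports whether any local interface address falls inside cidr.
    func subnetTaken(cidr string) bool {
    	_, ipnet, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return true // treat unparseable candidates as unusable
    	}
    	addrs, _ := net.InterfaceAddrs()
    	for _, a := range addrs {
    		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// Assumed scan: start at 192.168.49.0/24 and step the third octet by 9,
    	// matching the 49 -> 58 progression seen in this log.
    	for octet := 49; octet <= 254; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if subnetTaken(cidr) {
    			fmt.Println("skipping subnet", cidr, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", cidr)
    		return
    	}
    }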
	I0718 00:00:06.305466 1870087 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-451668" container
	I0718 00:00:06.305540 1870087 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 00:00:06.324227 1870087 cli_runner.go:164] Run: docker volume create multinode-451668 --label name.minikube.sigs.k8s.io=multinode-451668 --label created_by.minikube.sigs.k8s.io=true
	I0718 00:00:06.342367 1870087 oci.go:103] Successfully created a docker volume multinode-451668
	I0718 00:00:06.342519 1870087 cli_runner.go:164] Run: docker run --rm --name multinode-451668-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-451668 --entrypoint /usr/bin/test -v multinode-451668:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0718 00:00:06.955412 1870087 oci.go:107] Successfully prepared a docker volume multinode-451668
	I0718 00:00:06.955458 1870087 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0718 00:00:06.955478 1870087 kic.go:190] Starting extracting preloaded images to volume ...
	I0718 00:00:06.955582 1870087 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-451668:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 00:00:11.105021 1870087 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-451668:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.149393794s)
	I0718 00:00:11.105058 1870087 kic.go:199] duration metric: took 4.149575 seconds to extract preloaded images to volume
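The two steps above (docker volume create, then a --rm kicbase container whose entrypoint is /usr/bin/tar) are the standard pattern for populating a named volume before the real container mounts it. A minimal Go sketch of driving the same pattern with os/exec; paths, volume name, and image are taken from this log (the image digest is dropped for brevity), and error handling is deliberately terse.

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	tarball := "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4"
    	volume := "multinode-451668"
    	image := "gcr.io/k8s-minikube/kicbase:v0.0.40"

    	// Ensure the named volume exists (idempotent).
    	run("docker", "volume", "create", volume)

    	// Throwaway container: mount the tarball read-only alongside the
    	// volume, and untar straight into the volume's filesystem.
    	run("docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    }

    func run(name string, args ...string) {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }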
	W0718 00:00:11.105214 1870087 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0718 00:00:11.105348 1870087 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0718 00:00:11.181611 1870087 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-451668 --name multinode-451668 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-451668 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-451668 --network multinode-451668 --ip 192.168.58.2 --volume multinode-451668:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0718 00:00:11.528863 1870087 cli_runner.go:164] Run: docker container inspect multinode-451668 --format={{.State.Running}}
	I0718 00:00:11.561962 1870087 cli_runner.go:164] Run: docker container inspect multinode-451668 --format={{.State.Status}}
	I0718 00:00:11.589096 1870087 cli_runner.go:164] Run: docker exec multinode-451668 stat /var/lib/dpkg/alternatives/iptables
	I0718 00:00:11.660774 1870087 oci.go:144] the created container "multinode-451668" has a running status.
	I0718 00:00:11.660810 1870087 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa...
	I0718 00:00:12.006256 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0718 00:00:12.006311 1870087 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0718 00:00:12.036066 1870087 cli_runner.go:164] Run: docker container inspect multinode-451668 --format={{.State.Status}}
	I0718 00:00:12.060104 1870087 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0718 00:00:12.060130 1870087 kic_runner.go:114] Args: [docker exec --privileged multinode-451668 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0718 00:00:12.152853 1870087 cli_runner.go:164] Run: docker container inspect multinode-451668 --format={{.State.Status}}
	I0718 00:00:12.177494 1870087 machine.go:88] provisioning docker machine ...
	I0718 00:00:12.177524 1870087 ubuntu.go:169] provisioning hostname "multinode-451668"
	I0718 00:00:12.177593 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:00:12.208491 1870087 main.go:141] libmachine: Using SSH client type: native
	I0718 00:00:12.208956 1870087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34738 <nil> <nil>}
	I0718 00:00:12.208969 1870087 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-451668 && echo "multinode-451668" | sudo tee /etc/hostname
	I0718 00:00:12.209794 1870087 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0718 00:00:15.353836 1870087 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-451668
	
	I0718 00:00:15.353919 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:00:15.372842 1870087 main.go:141] libmachine: Using SSH client type: native
	I0718 00:00:15.373292 1870087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34738 <nil> <nil>}
	I0718 00:00:15.373316 1870087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-451668' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-451668/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-451668' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 00:00:15.503716 1870087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
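The two provisioning commands above run over the container's published SSH port (127.0.0.1:34738) with the generated id_rsa key. A minimal sketch of the same round trip using golang.org/x/crypto/ssh; libmachine's native client wraps the equivalent calls, and InsecureIgnoreHostKey is tolerable here only because the endpoint is a local test container.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test rig only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:34738", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()

    	// Same hostname command the log shows libmachine running.
    	out, err := sess.CombinedOutput(`sudo hostname multinode-451668 && echo "multinode-451668" | sudo tee /etc/hostname`)
    	fmt.Println(string(out), err)
    }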
	I0718 00:00:15.503749 1870087 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-1800837/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-1800837/.minikube}
	I0718 00:00:15.503777 1870087 ubuntu.go:177] setting up certificates
	I0718 00:00:15.503787 1870087 provision.go:83] configureAuth start
	I0718 00:00:15.503852 1870087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-451668
	I0718 00:00:15.524396 1870087 provision.go:138] copyHostCerts
	I0718 00:00:15.524443 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem
	I0718 00:00:15.524486 1870087 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem, removing ...
	I0718 00:00:15.524497 1870087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem
	I0718 00:00:15.524573 1870087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem (1082 bytes)
	I0718 00:00:15.524653 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem
	I0718 00:00:15.524677 1870087 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem, removing ...
	I0718 00:00:15.524685 1870087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem
	I0718 00:00:15.524712 1870087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem (1123 bytes)
	I0718 00:00:15.524756 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem
	I0718 00:00:15.524775 1870087 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem, removing ...
	I0718 00:00:15.524787 1870087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem
	I0718 00:00:15.524811 1870087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem (1675 bytes)
	I0718 00:00:15.524935 1870087 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem org=jenkins.multinode-451668 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-451668]
	I0718 00:00:15.885688 1870087 provision.go:172] copyRemoteCerts
	I0718 00:00:15.885780 1870087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 00:00:15.885828 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:00:15.904219 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34738 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa Username:docker}
	I0718 00:00:16.002553 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 00:00:16.002640 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0718 00:00:16.032722 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 00:00:16.032781 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 00:00:16.061279 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 00:00:16.061342 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 00:00:16.089751 1870087 provision.go:86] duration metric: configureAuth took 585.947834ms
	I0718 00:00:16.089776 1870087 ubuntu.go:193] setting minikube options for container-runtime
	I0718 00:00:16.089976 1870087 config.go:182] Loaded profile config "multinode-451668": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0718 00:00:16.090094 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:00:16.108074 1870087 main.go:141] libmachine: Using SSH client type: native
	I0718 00:00:16.108521 1870087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34738 <nil> <nil>}
	I0718 00:00:16.108544 1870087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0718 00:00:16.347440 1870087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0718 00:00:16.347459 1870087 machine.go:91] provisioned docker machine in 4.169946646s
	I0718 00:00:16.347469 1870087 client.go:171] LocalClient.Create took 10.163817075s
	I0718 00:00:16.347482 1870087 start.go:167] duration metric: libmachine.API.Create for "multinode-451668" took 10.163865633s
	I0718 00:00:16.347490 1870087 start.go:300] post-start starting for "multinode-451668" (driver="docker")
	I0718 00:00:16.347499 1870087 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 00:00:16.347588 1870087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 00:00:16.347634 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:00:16.368117 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34738 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa Username:docker}
	I0718 00:00:16.471061 1870087 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 00:00:16.475211 1870087 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0718 00:00:16.475234 1870087 command_runner.go:130] > NAME="Ubuntu"
	I0718 00:00:16.475242 1870087 command_runner.go:130] > VERSION_ID="22.04"
	I0718 00:00:16.475248 1870087 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0718 00:00:16.475257 1870087 command_runner.go:130] > VERSION_CODENAME=jammy
	I0718 00:00:16.475261 1870087 command_runner.go:130] > ID=ubuntu
	I0718 00:00:16.475267 1870087 command_runner.go:130] > ID_LIKE=debian
	I0718 00:00:16.475273 1870087 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0718 00:00:16.475279 1870087 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0718 00:00:16.475291 1870087 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0718 00:00:16.475299 1870087 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0718 00:00:16.475308 1870087 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0718 00:00:16.475367 1870087 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0718 00:00:16.475406 1870087 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0718 00:00:16.475423 1870087 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0718 00:00:16.475430 1870087 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0718 00:00:16.475445 1870087 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/addons for local assets ...
	I0718 00:00:16.475504 1870087 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/files for local assets ...
	I0718 00:00:16.475612 1870087 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> 18062262.pem in /etc/ssl/certs
	I0718 00:00:16.475626 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> /etc/ssl/certs/18062262.pem
	I0718 00:00:16.475744 1870087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 00:00:16.487437 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem --> /etc/ssl/certs/18062262.pem (1708 bytes)
	I0718 00:00:16.516573 1870087 start.go:303] post-start completed in 169.069795ms
	I0718 00:00:16.516960 1870087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-451668
	I0718 00:00:16.535153 1870087 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/config.json ...
	I0718 00:00:16.535441 1870087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 00:00:16.535483 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:00:16.553374 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34738 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa Username:docker}
	I0718 00:00:16.644483 1870087 command_runner.go:130] > 10%
	I0718 00:00:16.644559 1870087 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 00:00:16.650425 1870087 command_runner.go:130] > 176G
	I0718 00:00:16.650460 1870087 start.go:128] duration metric: createHost completed in 10.469450357s
	I0718 00:00:16.650469 1870087 start.go:83] releasing machines lock for "multinode-451668", held for 10.469573071s
	I0718 00:00:16.650551 1870087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-451668
	I0718 00:00:16.668289 1870087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 00:00:16.668330 1870087 ssh_runner.go:195] Run: cat /version.json
	I0718 00:00:16.668361 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:00:16.668374 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:00:16.689404 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34738 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa Username:docker}
	I0718 00:00:16.699134 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34738 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa Username:docker}
	I0718 00:00:16.916313 1870087 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0718 00:00:16.916361 1870087 command_runner.go:130] > {"iso_version": "v1.30.1-1689243309-16875", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "085433cd1b734742870dea5be8f9ee2ce4c54148"}
	I0718 00:00:16.916488 1870087 ssh_runner.go:195] Run: systemctl --version
	I0718 00:00:16.921843 1870087 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0718 00:00:16.921929 1870087 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0718 00:00:16.922196 1870087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0718 00:00:17.069116 1870087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 00:00:17.074551 1870087 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0718 00:00:17.074623 1870087 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0718 00:00:17.074645 1870087 command_runner.go:130] > Device: 3ah/58d	Inode: 2078873     Links: 1
	I0718 00:00:17.074669 1870087 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0718 00:00:17.074705 1870087 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0718 00:00:17.074724 1870087 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0718 00:00:17.074753 1870087 command_runner.go:130] > Change: 2023-07-17 23:37:43.440241036 +0000
	I0718 00:00:17.074778 1870087 command_runner.go:130] >  Birth: 2023-07-17 23:37:43.440241036 +0000
	I0718 00:00:17.075052 1870087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 00:00:17.099216 1870087 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0718 00:00:17.099295 1870087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 00:00:17.139524 1870087 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0718 00:00:17.139574 1870087 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0718 00:00:17.139583 1870087 start.go:466] detecting cgroup driver to use...
	I0718 00:00:17.139616 1870087 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0718 00:00:17.139672 1870087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 00:00:17.158814 1870087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 00:00:17.173062 1870087 docker.go:196] disabling cri-docker service (if available) ...
	I0718 00:00:17.173126 1870087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0718 00:00:17.189300 1870087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0718 00:00:17.206965 1870087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0718 00:00:17.296719 1870087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0718 00:00:17.313721 1870087 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0718 00:00:17.403003 1870087 docker.go:212] disabling docker service ...
	I0718 00:00:17.403075 1870087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0718 00:00:17.425896 1870087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0718 00:00:17.441278 1870087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0718 00:00:17.545142 1870087 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0718 00:00:17.545228 1870087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0718 00:00:17.662220 1870087 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0718 00:00:17.662331 1870087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0718 00:00:17.677594 1870087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 00:00:17.697224 1870087 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0718 00:00:17.698708 1870087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0718 00:00:17.698790 1870087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0718 00:00:17.711194 1870087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0718 00:00:17.711316 1870087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0718 00:00:17.723549 1870087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0718 00:00:17.735752 1870087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
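After the sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf carries the pause image, cgroup manager, and conmon cgroup shown in this log. A sketch of the relevant lines only; the TOML section headers are assumptions about where CRI-O keeps these keys, and whatever else the file already contains is omitted.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"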
	I0718 00:00:17.748930 1870087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 00:00:17.760240 1870087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 00:00:17.769674 1870087 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0718 00:00:17.770885 1870087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 00:00:17.781609 1870087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 00:00:17.873173 1870087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0718 00:00:17.998784 1870087 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0718 00:00:17.998935 1870087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0718 00:00:18.004532 1870087 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0718 00:00:18.004558 1870087 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0718 00:00:18.004571 1870087 command_runner.go:130] > Device: 43h/67d	Inode: 186         Links: 1
	I0718 00:00:18.004581 1870087 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0718 00:00:18.004587 1870087 command_runner.go:130] > Access: 2023-07-18 00:00:17.978652843 +0000
	I0718 00:00:18.004595 1870087 command_runner.go:130] > Modify: 2023-07-18 00:00:17.978652843 +0000
	I0718 00:00:18.004601 1870087 command_runner.go:130] > Change: 2023-07-18 00:00:17.978652843 +0000
	I0718 00:00:18.004607 1870087 command_runner.go:130] >  Birth: -
	I0718 00:00:18.004781 1870087 start.go:534] Will wait 60s for crictl version
	I0718 00:00:18.004846 1870087 ssh_runner.go:195] Run: which crictl
	I0718 00:00:18.009977 1870087 command_runner.go:130] > /usr/bin/crictl
	I0718 00:00:18.010094 1870087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 00:00:18.053288 1870087 command_runner.go:130] > Version:  0.1.0
	I0718 00:00:18.053606 1870087 command_runner.go:130] > RuntimeName:  cri-o
	I0718 00:00:18.053774 1870087 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0718 00:00:18.053962 1870087 command_runner.go:130] > RuntimeApiVersion:  v1
	I0718 00:00:18.056962 1870087 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0718 00:00:18.057082 1870087 ssh_runner.go:195] Run: crio --version
	I0718 00:00:18.106554 1870087 command_runner.go:130] > crio version 1.24.6
	I0718 00:00:18.106579 1870087 command_runner.go:130] > Version:          1.24.6
	I0718 00:00:18.106592 1870087 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0718 00:00:18.106631 1870087 command_runner.go:130] > GitTreeState:     clean
	I0718 00:00:18.106646 1870087 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0718 00:00:18.106652 1870087 command_runner.go:130] > GoVersion:        go1.18.2
	I0718 00:00:18.106658 1870087 command_runner.go:130] > Compiler:         gc
	I0718 00:00:18.106668 1870087 command_runner.go:130] > Platform:         linux/arm64
	I0718 00:00:18.106675 1870087 command_runner.go:130] > Linkmode:         dynamic
	I0718 00:00:18.106685 1870087 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0718 00:00:18.106707 1870087 command_runner.go:130] > SeccompEnabled:   true
	I0718 00:00:18.106727 1870087 command_runner.go:130] > AppArmorEnabled:  false
	I0718 00:00:18.109419 1870087 ssh_runner.go:195] Run: crio --version
	I0718 00:00:18.152807 1870087 command_runner.go:130] > crio version 1.24.6
	I0718 00:00:18.152827 1870087 command_runner.go:130] > Version:          1.24.6
	I0718 00:00:18.152836 1870087 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0718 00:00:18.152841 1870087 command_runner.go:130] > GitTreeState:     clean
	I0718 00:00:18.152848 1870087 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0718 00:00:18.152886 1870087 command_runner.go:130] > GoVersion:        go1.18.2
	I0718 00:00:18.152914 1870087 command_runner.go:130] > Compiler:         gc
	I0718 00:00:18.152924 1870087 command_runner.go:130] > Platform:         linux/arm64
	I0718 00:00:18.152930 1870087 command_runner.go:130] > Linkmode:         dynamic
	I0718 00:00:18.152954 1870087 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0718 00:00:18.152965 1870087 command_runner.go:130] > SeccompEnabled:   true
	I0718 00:00:18.152971 1870087 command_runner.go:130] > AppArmorEnabled:  false
	I0718 00:00:18.157945 1870087 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0718 00:00:18.160067 1870087 cli_runner.go:164] Run: docker network inspect multinode-451668 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 00:00:18.177801 1870087 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0718 00:00:18.182482 1870087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 00:00:18.195753 1870087 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0718 00:00:18.195818 1870087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0718 00:00:18.258978 1870087 command_runner.go:130] > {
	I0718 00:00:18.258995 1870087 command_runner.go:130] >   "images": [
	I0718 00:00:18.259000 1870087 command_runner.go:130] >     {
	I0718 00:00:18.259009 1870087 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0718 00:00:18.259015 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.259022 1870087 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0718 00:00:18.259027 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259032 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.259044 1870087 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0718 00:00:18.259054 1870087 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0718 00:00:18.259060 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259065 1870087 command_runner.go:130] >       "size": "60881430",
	I0718 00:00:18.259070 1870087 command_runner.go:130] >       "uid": null,
	I0718 00:00:18.259075 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.259081 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.259086 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.259090 1870087 command_runner.go:130] >     },
	I0718 00:00:18.259098 1870087 command_runner.go:130] >     {
	I0718 00:00:18.259106 1870087 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0718 00:00:18.259110 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.259116 1870087 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0718 00:00:18.259121 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259126 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.259135 1870087 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0718 00:00:18.259145 1870087 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0718 00:00:18.259149 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259158 1870087 command_runner.go:130] >       "size": "29037500",
	I0718 00:00:18.259163 1870087 command_runner.go:130] >       "uid": null,
	I0718 00:00:18.259174 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.259179 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.259184 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.259188 1870087 command_runner.go:130] >     },
	I0718 00:00:18.259193 1870087 command_runner.go:130] >     {
	I0718 00:00:18.259200 1870087 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0718 00:00:18.259205 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.259211 1870087 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0718 00:00:18.259215 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259220 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.259229 1870087 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0718 00:00:18.259239 1870087 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0718 00:00:18.259243 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259248 1870087 command_runner.go:130] >       "size": "51393451",
	I0718 00:00:18.259252 1870087 command_runner.go:130] >       "uid": null,
	I0718 00:00:18.259257 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.259262 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.259268 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.259276 1870087 command_runner.go:130] >     },
	I0718 00:00:18.259280 1870087 command_runner.go:130] >     {
	I0718 00:00:18.259287 1870087 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0718 00:00:18.259292 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.259298 1870087 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0718 00:00:18.259302 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259307 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.259316 1870087 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0718 00:00:18.259325 1870087 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0718 00:00:18.259333 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259339 1870087 command_runner.go:130] >       "size": "182283991",
	I0718 00:00:18.259343 1870087 command_runner.go:130] >       "uid": {
	I0718 00:00:18.259348 1870087 command_runner.go:130] >         "value": "0"
	I0718 00:00:18.259352 1870087 command_runner.go:130] >       },
	I0718 00:00:18.259357 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.259362 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.259366 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.259370 1870087 command_runner.go:130] >     },
	I0718 00:00:18.259376 1870087 command_runner.go:130] >     {
	I0718 00:00:18.259384 1870087 command_runner.go:130] >       "id": "39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473",
	I0718 00:00:18.259389 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.259395 1870087 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0718 00:00:18.259399 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259404 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.259413 1870087 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090",
	I0718 00:00:18.259422 1870087 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0718 00:00:18.259426 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259431 1870087 command_runner.go:130] >       "size": "116204496",
	I0718 00:00:18.259435 1870087 command_runner.go:130] >       "uid": {
	I0718 00:00:18.259440 1870087 command_runner.go:130] >         "value": "0"
	I0718 00:00:18.259444 1870087 command_runner.go:130] >       },
	I0718 00:00:18.259449 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.259453 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.259458 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.259462 1870087 command_runner.go:130] >     },
	I0718 00:00:18.259466 1870087 command_runner.go:130] >     {
	I0718 00:00:18.259475 1870087 command_runner.go:130] >       "id": "ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8",
	I0718 00:00:18.259480 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.259486 1870087 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0718 00:00:18.259490 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259495 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.259505 1870087 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0",
	I0718 00:00:18.259514 1870087 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"
	I0718 00:00:18.259518 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259524 1870087 command_runner.go:130] >       "size": "108667702",
	I0718 00:00:18.259528 1870087 command_runner.go:130] >       "uid": {
	I0718 00:00:18.259533 1870087 command_runner.go:130] >         "value": "0"
	I0718 00:00:18.259537 1870087 command_runner.go:130] >       },
	I0718 00:00:18.259542 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.259547 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.259551 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.259555 1870087 command_runner.go:130] >     },
	I0718 00:00:18.259561 1870087 command_runner.go:130] >     {
	I0718 00:00:18.259579 1870087 command_runner.go:130] >       "id": "fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a",
	I0718 00:00:18.259587 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.259593 1870087 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0718 00:00:18.259597 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259602 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.259611 1870087 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53",
	I0718 00:00:18.259620 1870087 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0718 00:00:18.259624 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259629 1870087 command_runner.go:130] >       "size": "68099991",
	I0718 00:00:18.259633 1870087 command_runner.go:130] >       "uid": null,
	I0718 00:00:18.259638 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.259642 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.259647 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.259651 1870087 command_runner.go:130] >     },
	I0718 00:00:18.259655 1870087 command_runner.go:130] >     {
	I0718 00:00:18.259662 1870087 command_runner.go:130] >       "id": "bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540",
	I0718 00:00:18.259667 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.259673 1870087 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0718 00:00:18.259677 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259683 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.259720 1870087 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf",
	I0718 00:00:18.259730 1870087 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0718 00:00:18.259734 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259740 1870087 command_runner.go:130] >       "size": "57615158",
	I0718 00:00:18.259744 1870087 command_runner.go:130] >       "uid": {
	I0718 00:00:18.259749 1870087 command_runner.go:130] >         "value": "0"
	I0718 00:00:18.259753 1870087 command_runner.go:130] >       },
	I0718 00:00:18.259758 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.259762 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.259767 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.259771 1870087 command_runner.go:130] >     },
	I0718 00:00:18.259775 1870087 command_runner.go:130] >     {
	I0718 00:00:18.259782 1870087 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0718 00:00:18.259787 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.259792 1870087 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0718 00:00:18.259796 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259801 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.259811 1870087 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0718 00:00:18.259821 1870087 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0718 00:00:18.259825 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.259830 1870087 command_runner.go:130] >       "size": "520014",
	I0718 00:00:18.259834 1870087 command_runner.go:130] >       "uid": {
	I0718 00:00:18.259839 1870087 command_runner.go:130] >         "value": "65535"
	I0718 00:00:18.259843 1870087 command_runner.go:130] >       },
	I0718 00:00:18.259847 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.259852 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.259858 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.259862 1870087 command_runner.go:130] >     }
	I0718 00:00:18.259866 1870087 command_runner.go:130] >   ]
	I0718 00:00:18.259870 1870087 command_runner.go:130] > }
	I0718 00:00:18.267573 1870087 crio.go:496] all images are preloaded for cri-o runtime.
	I0718 00:00:18.267594 1870087 crio.go:415] Images already preloaded, skipping extraction
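For anyone post-processing these logs: the `sudo crictl images --output json` payload echoed above deserializes cleanly with a hand-rolled subset struct. A minimal Go sketch; the field names mirror the JSON exactly, but the struct is illustrative rather than crictl's own type, and the nullable "uid" object is simply skipped.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList models just the fields used here from crictl's JSON output.
    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"` // note: a string in the payload, not a number
    		Pinned      bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	for _, img := range list.Images {
    		fmt.Println(img.RepoTags, img.Size)
    	}
    }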
	I0718 00:00:18.267660 1870087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0718 00:00:18.311360 1870087 command_runner.go:130] > {
	I0718 00:00:18.311381 1870087 command_runner.go:130] >   "images": [
	I0718 00:00:18.311386 1870087 command_runner.go:130] >     {
	I0718 00:00:18.311396 1870087 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0718 00:00:18.311412 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.311422 1870087 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0718 00:00:18.311430 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.311435 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.311447 1870087 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0718 00:00:18.311463 1870087 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0718 00:00:18.311473 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.311479 1870087 command_runner.go:130] >       "size": "60881430",
	I0718 00:00:18.311488 1870087 command_runner.go:130] >       "uid": null,
	I0718 00:00:18.311493 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.311499 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.311506 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.311510 1870087 command_runner.go:130] >     },
	I0718 00:00:18.311515 1870087 command_runner.go:130] >     {
	I0718 00:00:18.311523 1870087 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0718 00:00:18.311531 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.311538 1870087 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0718 00:00:18.311545 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.311551 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.311564 1870087 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0718 00:00:18.311583 1870087 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0718 00:00:18.311587 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.311596 1870087 command_runner.go:130] >       "size": "29037500",
	I0718 00:00:18.311601 1870087 command_runner.go:130] >       "uid": null,
	I0718 00:00:18.311606 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.311611 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.311618 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.311622 1870087 command_runner.go:130] >     },
	I0718 00:00:18.311627 1870087 command_runner.go:130] >     {
	I0718 00:00:18.311634 1870087 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0718 00:00:18.311643 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.311649 1870087 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0718 00:00:18.311657 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.311662 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.311675 1870087 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0718 00:00:18.311688 1870087 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0718 00:00:18.311695 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.311700 1870087 command_runner.go:130] >       "size": "51393451",
	I0718 00:00:18.311707 1870087 command_runner.go:130] >       "uid": null,
	I0718 00:00:18.311713 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.311719 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.311724 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.311731 1870087 command_runner.go:130] >     },
	I0718 00:00:18.311736 1870087 command_runner.go:130] >     {
	I0718 00:00:18.311747 1870087 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0718 00:00:18.311755 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.311762 1870087 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0718 00:00:18.311769 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.311774 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.311783 1870087 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0718 00:00:18.311796 1870087 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0718 00:00:18.311810 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.311819 1870087 command_runner.go:130] >       "size": "182283991",
	I0718 00:00:18.311824 1870087 command_runner.go:130] >       "uid": {
	I0718 00:00:18.311832 1870087 command_runner.go:130] >         "value": "0"
	I0718 00:00:18.311837 1870087 command_runner.go:130] >       },
	I0718 00:00:18.311847 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.311855 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.311860 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.311866 1870087 command_runner.go:130] >     },
	I0718 00:00:18.311870 1870087 command_runner.go:130] >     {
	I0718 00:00:18.311878 1870087 command_runner.go:130] >       "id": "39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473",
	I0718 00:00:18.311884 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.311895 1870087 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0718 00:00:18.311902 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.311910 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.311919 1870087 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090",
	I0718 00:00:18.311931 1870087 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0718 00:00:18.311939 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.311944 1870087 command_runner.go:130] >       "size": "116204496",
	I0718 00:00:18.311948 1870087 command_runner.go:130] >       "uid": {
	I0718 00:00:18.311953 1870087 command_runner.go:130] >         "value": "0"
	I0718 00:00:18.311958 1870087 command_runner.go:130] >       },
	I0718 00:00:18.311965 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.311976 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.311986 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.311990 1870087 command_runner.go:130] >     },
	I0718 00:00:18.311998 1870087 command_runner.go:130] >     {
	I0718 00:00:18.312006 1870087 command_runner.go:130] >       "id": "ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8",
	I0718 00:00:18.312015 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.312022 1870087 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0718 00:00:18.312027 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.312033 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.312043 1870087 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0",
	I0718 00:00:18.312054 1870087 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"
	I0718 00:00:18.312062 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.312068 1870087 command_runner.go:130] >       "size": "108667702",
	I0718 00:00:18.312075 1870087 command_runner.go:130] >       "uid": {
	I0718 00:00:18.312080 1870087 command_runner.go:130] >         "value": "0"
	I0718 00:00:18.312087 1870087 command_runner.go:130] >       },
	I0718 00:00:18.312093 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.312101 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.312108 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.312112 1870087 command_runner.go:130] >     },
	I0718 00:00:18.312118 1870087 command_runner.go:130] >     {
	I0718 00:00:18.312126 1870087 command_runner.go:130] >       "id": "fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a",
	I0718 00:00:18.312131 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.312139 1870087 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0718 00:00:18.312146 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.312151 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.312164 1870087 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53",
	I0718 00:00:18.312176 1870087 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0718 00:00:18.312184 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.312189 1870087 command_runner.go:130] >       "size": "68099991",
	I0718 00:00:18.312198 1870087 command_runner.go:130] >       "uid": null,
	I0718 00:00:18.312203 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.312208 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.312212 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.312218 1870087 command_runner.go:130] >     },
	I0718 00:00:18.312223 1870087 command_runner.go:130] >     {
	I0718 00:00:18.312237 1870087 command_runner.go:130] >       "id": "bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540",
	I0718 00:00:18.312245 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.312252 1870087 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0718 00:00:18.312259 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.312265 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.312607 1870087 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf",
	I0718 00:00:18.312628 1870087 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0718 00:00:18.312633 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.312649 1870087 command_runner.go:130] >       "size": "57615158",
	I0718 00:00:18.312658 1870087 command_runner.go:130] >       "uid": {
	I0718 00:00:18.312664 1870087 command_runner.go:130] >         "value": "0"
	I0718 00:00:18.312672 1870087 command_runner.go:130] >       },
	I0718 00:00:18.312678 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.312683 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.312695 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.312699 1870087 command_runner.go:130] >     },
	I0718 00:00:18.312706 1870087 command_runner.go:130] >     {
	I0718 00:00:18.312714 1870087 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0718 00:00:18.312730 1870087 command_runner.go:130] >       "repoTags": [
	I0718 00:00:18.312739 1870087 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0718 00:00:18.312748 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.312753 1870087 command_runner.go:130] >       "repoDigests": [
	I0718 00:00:18.312769 1870087 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0718 00:00:18.312779 1870087 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0718 00:00:18.312785 1870087 command_runner.go:130] >       ],
	I0718 00:00:18.312795 1870087 command_runner.go:130] >       "size": "520014",
	I0718 00:00:18.312802 1870087 command_runner.go:130] >       "uid": {
	I0718 00:00:18.312808 1870087 command_runner.go:130] >         "value": "65535"
	I0718 00:00:18.312815 1870087 command_runner.go:130] >       },
	I0718 00:00:18.312821 1870087 command_runner.go:130] >       "username": "",
	I0718 00:00:18.312829 1870087 command_runner.go:130] >       "spec": null,
	I0718 00:00:18.312837 1870087 command_runner.go:130] >       "pinned": false
	I0718 00:00:18.312843 1870087 command_runner.go:130] >     }
	I0718 00:00:18.312850 1870087 command_runner.go:130] >   ]
	I0718 00:00:18.312854 1870087 command_runner.go:130] > }
	I0718 00:00:18.320003 1870087 crio.go:496] all images are preloaded for cri-o runtime.
	I0718 00:00:18.320028 1870087 cache_images.go:84] Images are preloaded, skipping loading
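
	The JSON dump above is CRI-O's image inventory; minikube parses it to confirm that every image required for Kubernetes v1.27.3 is already present before it skips the load step. A roughly equivalent query can be issued by hand on the node (a sketch, assuming crictl is installed and pointed at the CRI-O socket; this is not necessarily the exact command minikube ran):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --output json

	The output carries the same per-image fields seen above: repoTags, repoDigests, size, uid, username, and pinned.
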
	I0718 00:00:18.320112 1870087 ssh_runner.go:195] Run: crio config
	I0718 00:00:18.373003 1870087 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0718 00:00:18.373068 1870087 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0718 00:00:18.373091 1870087 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0718 00:00:18.373112 1870087 command_runner.go:130] > #
	I0718 00:00:18.373149 1870087 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0718 00:00:18.373179 1870087 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0718 00:00:18.373201 1870087 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0718 00:00:18.373233 1870087 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0718 00:00:18.373261 1870087 command_runner.go:130] > # reload'.
	I0718 00:00:18.373289 1870087 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0718 00:00:18.373311 1870087 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0718 00:00:18.373333 1870087 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0718 00:00:18.373365 1870087 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0718 00:00:18.373385 1870087 command_runner.go:130] > [crio]
	I0718 00:00:18.373406 1870087 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0718 00:00:18.373428 1870087 command_runner.go:130] > # container images, in this directory.
	I0718 00:00:18.373462 1870087 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0718 00:00:18.373486 1870087 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0718 00:00:18.373657 1870087 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0718 00:00:18.373691 1870087 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0718 00:00:18.373714 1870087 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0718 00:00:18.373734 1870087 command_runner.go:130] > # storage_driver = "vfs"
	I0718 00:00:18.373768 1870087 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0718 00:00:18.373790 1870087 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0718 00:00:18.373810 1870087 command_runner.go:130] > # storage_option = [
	I0718 00:00:18.373829 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.373851 1870087 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0718 00:00:18.373886 1870087 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0718 00:00:18.373905 1870087 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0718 00:00:18.373927 1870087 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0718 00:00:18.373959 1870087 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0718 00:00:18.373980 1870087 command_runner.go:130] > # always happen on a node reboot
	I0718 00:00:18.374000 1870087 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0718 00:00:18.374022 1870087 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0718 00:00:18.374042 1870087 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0718 00:00:18.374079 1870087 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0718 00:00:18.374099 1870087 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0718 00:00:18.374121 1870087 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0718 00:00:18.374163 1870087 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0718 00:00:18.374492 1870087 command_runner.go:130] > # internal_wipe = true
	I0718 00:00:18.374531 1870087 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0718 00:00:18.374551 1870087 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0718 00:00:18.374572 1870087 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0718 00:00:18.374649 1870087 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0718 00:00:18.374687 1870087 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0718 00:00:18.374704 1870087 command_runner.go:130] > [crio.api]
	I0718 00:00:18.374723 1870087 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0718 00:00:18.374754 1870087 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0718 00:00:18.374776 1870087 command_runner.go:130] > # IP address on which the stream server will listen.
	I0718 00:00:18.374797 1870087 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0718 00:00:18.374820 1870087 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0718 00:00:18.374852 1870087 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0718 00:00:18.375042 1870087 command_runner.go:130] > # stream_port = "0"
	I0718 00:00:18.375078 1870087 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0718 00:00:18.375098 1870087 command_runner.go:130] > # stream_enable_tls = false
	I0718 00:00:18.375120 1870087 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0718 00:00:18.375199 1870087 command_runner.go:130] > # stream_idle_timeout = ""
	I0718 00:00:18.375228 1870087 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0718 00:00:18.375246 1870087 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0718 00:00:18.375264 1870087 command_runner.go:130] > # minutes.
	I0718 00:00:18.375284 1870087 command_runner.go:130] > # stream_tls_cert = ""
	I0718 00:00:18.375313 1870087 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0718 00:00:18.375341 1870087 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0718 00:00:18.375361 1870087 command_runner.go:130] > # stream_tls_key = ""
	I0718 00:00:18.375383 1870087 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0718 00:00:18.375416 1870087 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0718 00:00:18.375438 1870087 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0718 00:00:18.375627 1870087 command_runner.go:130] > # stream_tls_ca = ""
	I0718 00:00:18.375664 1870087 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0718 00:00:18.375828 1870087 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0718 00:00:18.375863 1870087 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0718 00:00:18.375882 1870087 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0718 00:00:18.375936 1870087 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0718 00:00:18.375961 1870087 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0718 00:00:18.375979 1870087 command_runner.go:130] > [crio.runtime]
	I0718 00:00:18.376001 1870087 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0718 00:00:18.376022 1870087 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0718 00:00:18.376053 1870087 command_runner.go:130] > # "nofile=1024:2048"
	I0718 00:00:18.376074 1870087 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0718 00:00:18.376094 1870087 command_runner.go:130] > # default_ulimits = [
	I0718 00:00:18.376112 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.376141 1870087 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0718 00:00:18.376404 1870087 command_runner.go:130] > # no_pivot = false
	I0718 00:00:18.376440 1870087 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0718 00:00:18.376462 1870087 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0718 00:00:18.376483 1870087 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0718 00:00:18.376523 1870087 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0718 00:00:18.376550 1870087 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0718 00:00:18.376574 1870087 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0718 00:00:18.376752 1870087 command_runner.go:130] > # conmon = ""
	I0718 00:00:18.376790 1870087 command_runner.go:130] > # Cgroup setting for conmon
	I0718 00:00:18.376830 1870087 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0718 00:00:18.376868 1870087 command_runner.go:130] > conmon_cgroup = "pod"
	I0718 00:00:18.376893 1870087 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0718 00:00:18.376914 1870087 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0718 00:00:18.376936 1870087 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0718 00:00:18.377121 1870087 command_runner.go:130] > # conmon_env = [
	I0718 00:00:18.377212 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.377252 1870087 command_runner.go:130] > # Additional environment variables to set for all the
	I0718 00:00:18.377271 1870087 command_runner.go:130] > # containers. These are overridden if set in the
	I0718 00:00:18.377292 1870087 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0718 00:00:18.377311 1870087 command_runner.go:130] > # default_env = [
	I0718 00:00:18.377336 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.377361 1870087 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0718 00:00:18.377381 1870087 command_runner.go:130] > # selinux = false
	I0718 00:00:18.377402 1870087 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0718 00:00:18.377435 1870087 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0718 00:00:18.377457 1870087 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0718 00:00:18.377538 1870087 command_runner.go:130] > # seccomp_profile = ""
	I0718 00:00:18.377573 1870087 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0718 00:00:18.377595 1870087 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0718 00:00:18.377625 1870087 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0718 00:00:18.377658 1870087 command_runner.go:130] > # which might increase security.
	I0718 00:00:18.377677 1870087 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0718 00:00:18.377698 1870087 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0718 00:00:18.377731 1870087 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0718 00:00:18.377755 1870087 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0718 00:00:18.377776 1870087 command_runner.go:130] > # the profile is set to "unconfined", then this amounts to disabling AppArmor.
	I0718 00:00:18.377798 1870087 command_runner.go:130] > # This option supports live configuration reload.
	I0718 00:00:18.378087 1870087 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0718 00:00:18.378131 1870087 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0718 00:00:18.378151 1870087 command_runner.go:130] > # the cgroup blockio controller.
	I0718 00:00:18.378170 1870087 command_runner.go:130] > # blockio_config_file = ""
	I0718 00:00:18.378207 1870087 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0718 00:00:18.378229 1870087 command_runner.go:130] > # irqbalance daemon.
	I0718 00:00:18.378250 1870087 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0718 00:00:18.378272 1870087 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0718 00:00:18.378303 1870087 command_runner.go:130] > # This option supports live configuration reload.
	I0718 00:00:18.378323 1870087 command_runner.go:130] > # rdt_config_file = ""
	I0718 00:00:18.378345 1870087 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0718 00:00:18.378632 1870087 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0718 00:00:18.378671 1870087 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0718 00:00:18.378690 1870087 command_runner.go:130] > # separate_pull_cgroup = ""
	I0718 00:00:18.378713 1870087 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0718 00:00:18.378749 1870087 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0718 00:00:18.378771 1870087 command_runner.go:130] > # will be added.
	I0718 00:00:18.378791 1870087 command_runner.go:130] > # default_capabilities = [
	I0718 00:00:18.379097 1870087 command_runner.go:130] > # 	"CHOWN",
	I0718 00:00:18.379132 1870087 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0718 00:00:18.379150 1870087 command_runner.go:130] > # 	"FSETID",
	I0718 00:00:18.379169 1870087 command_runner.go:130] > # 	"FOWNER",
	I0718 00:00:18.379189 1870087 command_runner.go:130] > # 	"SETGID",
	I0718 00:00:18.379220 1870087 command_runner.go:130] > # 	"SETUID",
	I0718 00:00:18.379244 1870087 command_runner.go:130] > # 	"SETPCAP",
	I0718 00:00:18.379527 1870087 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0718 00:00:18.379579 1870087 command_runner.go:130] > # 	"KILL",
	I0718 00:00:18.379600 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.379644 1870087 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0718 00:00:18.379671 1870087 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0718 00:00:18.380101 1870087 command_runner.go:130] > # add_inheritable_capabilities = true
	I0718 00:00:18.380148 1870087 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0718 00:00:18.380171 1870087 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0718 00:00:18.380242 1870087 command_runner.go:130] > # default_sysctls = [
	I0718 00:00:18.380268 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.380291 1870087 command_runner.go:130] > # List of devices on the host that a
	I0718 00:00:18.380315 1870087 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0718 00:00:18.380348 1870087 command_runner.go:130] > # allowed_devices = [
	I0718 00:00:18.380370 1870087 command_runner.go:130] > # 	"/dev/fuse",
	I0718 00:00:18.380388 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.380409 1870087 command_runner.go:130] > # List of additional devices, specified as
	I0718 00:00:18.380475 1870087 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0718 00:00:18.380504 1870087 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0718 00:00:18.380527 1870087 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0718 00:00:18.380862 1870087 command_runner.go:130] > # additional_devices = [
	I0718 00:00:18.380898 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.380919 1870087 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0718 00:00:18.380941 1870087 command_runner.go:130] > # cdi_spec_dirs = [
	I0718 00:00:18.380974 1870087 command_runner.go:130] > # 	"/etc/cdi",
	I0718 00:00:18.380996 1870087 command_runner.go:130] > # 	"/var/run/cdi",
	I0718 00:00:18.381015 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.381038 1870087 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0718 00:00:18.381072 1870087 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0718 00:00:18.381093 1870087 command_runner.go:130] > # Defaults to false.
	I0718 00:00:18.381343 1870087 command_runner.go:130] > # device_ownership_from_security_context = false
	I0718 00:00:18.381385 1870087 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0718 00:00:18.381408 1870087 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0718 00:00:18.381428 1870087 command_runner.go:130] > # hooks_dir = [
	I0718 00:00:18.381687 1870087 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0718 00:00:18.381720 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.381741 1870087 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0718 00:00:18.381763 1870087 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0718 00:00:18.381796 1870087 command_runner.go:130] > # its default mounts from the following two files:
	I0718 00:00:18.381816 1870087 command_runner.go:130] > #
	I0718 00:00:18.381838 1870087 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0718 00:00:18.381860 1870087 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0718 00:00:18.381902 1870087 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0718 00:00:18.381927 1870087 command_runner.go:130] > #
	I0718 00:00:18.381950 1870087 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0718 00:00:18.381971 1870087 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0718 00:00:18.382004 1870087 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0718 00:00:18.382026 1870087 command_runner.go:130] > #      only add mounts it finds in this file.
	I0718 00:00:18.382042 1870087 command_runner.go:130] > #
	I0718 00:00:18.382061 1870087 command_runner.go:130] > # default_mounts_file = ""
	I0718 00:00:18.382082 1870087 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0718 00:00:18.382115 1870087 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0718 00:00:18.382141 1870087 command_runner.go:130] > # pids_limit = 0
	I0718 00:00:18.382163 1870087 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0718 00:00:18.382185 1870087 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0718 00:00:18.382218 1870087 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0718 00:00:18.382245 1870087 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0718 00:00:18.382537 1870087 command_runner.go:130] > # log_size_max = -1
	I0718 00:00:18.382575 1870087 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0718 00:00:18.382601 1870087 command_runner.go:130] > # log_to_journald = false
	I0718 00:00:18.382623 1870087 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0718 00:00:18.382876 1870087 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0718 00:00:18.382910 1870087 command_runner.go:130] > # Path to directory for container attach sockets.
	I0718 00:00:18.382929 1870087 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0718 00:00:18.382951 1870087 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0718 00:00:18.383239 1870087 command_runner.go:130] > # bind_mount_prefix = ""
	I0718 00:00:18.383275 1870087 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0718 00:00:18.383293 1870087 command_runner.go:130] > # read_only = false
	I0718 00:00:18.383322 1870087 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0718 00:00:18.383355 1870087 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0718 00:00:18.383380 1870087 command_runner.go:130] > # live configuration reload.
	I0718 00:00:18.383399 1870087 command_runner.go:130] > # log_level = "info"
	I0718 00:00:18.383420 1870087 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0718 00:00:18.383441 1870087 command_runner.go:130] > # This option supports live configuration reload.
	I0718 00:00:18.383469 1870087 command_runner.go:130] > # log_filter = ""
	I0718 00:00:18.383495 1870087 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0718 00:00:18.383518 1870087 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0718 00:00:18.383538 1870087 command_runner.go:130] > # separated by comma.
	I0718 00:00:18.383556 1870087 command_runner.go:130] > # uid_mappings = ""
	I0718 00:00:18.383601 1870087 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0718 00:00:18.383623 1870087 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0718 00:00:18.383642 1870087 command_runner.go:130] > # separated by comma.
	I0718 00:00:18.383914 1870087 command_runner.go:130] > # gid_mappings = ""
	I0718 00:00:18.383961 1870087 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0718 00:00:18.383984 1870087 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0718 00:00:18.384008 1870087 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0718 00:00:18.384042 1870087 command_runner.go:130] > # minimum_mappable_uid = -1
	I0718 00:00:18.384071 1870087 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0718 00:00:18.384096 1870087 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0718 00:00:18.384119 1870087 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0718 00:00:18.384153 1870087 command_runner.go:130] > # minimum_mappable_gid = -1
	I0718 00:00:18.384184 1870087 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0718 00:00:18.384216 1870087 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0718 00:00:18.384238 1870087 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I0718 00:00:18.384525 1870087 command_runner.go:130] > # ctr_stop_timeout = 30
	I0718 00:00:18.384561 1870087 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0718 00:00:18.384582 1870087 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0718 00:00:18.384611 1870087 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0718 00:00:18.384644 1870087 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0718 00:00:18.384673 1870087 command_runner.go:130] > # drop_infra_ctr = true
	I0718 00:00:18.384696 1870087 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0718 00:00:18.384718 1870087 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0718 00:00:18.384755 1870087 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0718 00:00:18.384776 1870087 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0718 00:00:18.384797 1870087 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0718 00:00:18.384819 1870087 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0718 00:00:18.384839 1870087 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0718 00:00:18.384874 1870087 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0718 00:00:18.385123 1870087 command_runner.go:130] > # pinns_path = ""
	I0718 00:00:18.385159 1870087 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0718 00:00:18.385179 1870087 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0718 00:00:18.385202 1870087 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0718 00:00:18.385482 1870087 command_runner.go:130] > # default_runtime = "runc"
	I0718 00:00:18.385516 1870087 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0718 00:00:18.385549 1870087 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0718 00:00:18.385574 1870087 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0718 00:00:18.385614 1870087 command_runner.go:130] > # creation as a file is not desired either.
	I0718 00:00:18.385644 1870087 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0718 00:00:18.385664 1870087 command_runner.go:130] > # the hostname is being managed dynamically.
	I0718 00:00:18.385683 1870087 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0718 00:00:18.385701 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.385735 1870087 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0718 00:00:18.385758 1870087 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0718 00:00:18.385784 1870087 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0718 00:00:18.385815 1870087 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0718 00:00:18.385842 1870087 command_runner.go:130] > #
	I0718 00:00:18.385863 1870087 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0718 00:00:18.385883 1870087 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0718 00:00:18.385902 1870087 command_runner.go:130] > #  runtime_type = "oci"
	I0718 00:00:18.385930 1870087 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0718 00:00:18.385954 1870087 command_runner.go:130] > #  privileged_without_host_devices = false
	I0718 00:00:18.385974 1870087 command_runner.go:130] > #  allowed_annotations = []
	I0718 00:00:18.385992 1870087 command_runner.go:130] > # Where:
	I0718 00:00:18.386013 1870087 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0718 00:00:18.386048 1870087 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0718 00:00:18.386074 1870087 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0718 00:00:18.386096 1870087 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0718 00:00:18.386115 1870087 command_runner.go:130] > #   in $PATH.
	I0718 00:00:18.386150 1870087 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0718 00:00:18.386172 1870087 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0718 00:00:18.386193 1870087 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0718 00:00:18.386213 1870087 command_runner.go:130] > #   state.
	I0718 00:00:18.386246 1870087 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0718 00:00:18.386270 1870087 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0718 00:00:18.386291 1870087 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0718 00:00:18.386311 1870087 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0718 00:00:18.386342 1870087 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0718 00:00:18.386367 1870087 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0718 00:00:18.386386 1870087 command_runner.go:130] > #   The currently recognized values are:
	I0718 00:00:18.386428 1870087 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0718 00:00:18.386456 1870087 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0718 00:00:18.386478 1870087 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0718 00:00:18.386506 1870087 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0718 00:00:18.386539 1870087 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0718 00:00:18.386568 1870087 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0718 00:00:18.386590 1870087 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0718 00:00:18.386613 1870087 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0718 00:00:18.386643 1870087 command_runner.go:130] > #   should be moved to the container's cgroup
	I0718 00:00:18.386666 1870087 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0718 00:00:18.386945 1870087 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0718 00:00:18.386981 1870087 command_runner.go:130] > runtime_type = "oci"
	I0718 00:00:18.386999 1870087 command_runner.go:130] > runtime_root = "/run/runc"
	I0718 00:00:18.387019 1870087 command_runner.go:130] > runtime_config_path = ""
	I0718 00:00:18.387040 1870087 command_runner.go:130] > monitor_path = ""
	I0718 00:00:18.387075 1870087 command_runner.go:130] > monitor_cgroup = ""
	I0718 00:00:18.387097 1870087 command_runner.go:130] > monitor_exec_cgroup = ""
	I0718 00:00:18.387172 1870087 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0718 00:00:18.387200 1870087 command_runner.go:130] > # running containers
	I0718 00:00:18.387221 1870087 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0718 00:00:18.387244 1870087 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0718 00:00:18.387285 1870087 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0718 00:00:18.387311 1870087 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0718 00:00:18.387332 1870087 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0718 00:00:18.387353 1870087 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0718 00:00:18.387386 1870087 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0718 00:00:18.388254 1870087 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0718 00:00:18.388280 1870087 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0718 00:00:18.388317 1870087 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0718 00:00:18.388484 1870087 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0718 00:00:18.388508 1870087 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0718 00:00:18.388545 1870087 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0718 00:00:18.388647 1870087 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0718 00:00:18.388672 1870087 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0718 00:00:18.388693 1870087 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0718 00:00:18.388829 1870087 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0718 00:00:18.388854 1870087 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0718 00:00:18.388889 1870087 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0718 00:00:18.388993 1870087 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0718 00:00:18.389030 1870087 command_runner.go:130] > # Example:
	I0718 00:00:18.389063 1870087 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0718 00:00:18.389086 1870087 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0718 00:00:18.389156 1870087 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0718 00:00:18.389325 1870087 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0718 00:00:18.389398 1870087 command_runner.go:130] > # cpuset = 0
	I0718 00:00:18.389522 1870087 command_runner.go:130] > # cpushares = "0-1"
	I0718 00:00:18.389543 1870087 command_runner.go:130] > # Where:
	I0718 00:00:18.389591 1870087 command_runner.go:130] > # The workload name is workload-type.
	I0718 00:00:18.389800 1870087 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0718 00:00:18.389840 1870087 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0718 00:00:18.389909 1870087 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0718 00:00:18.389934 1870087 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0718 00:00:18.389957 1870087 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0718 00:00:18.390183 1870087 command_runner.go:130] > # 
	I0718 00:00:18.390309 1870087 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0718 00:00:18.390331 1870087 command_runner.go:130] > #
	I0718 00:00:18.390374 1870087 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0718 00:00:18.390522 1870087 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0718 00:00:18.390547 1870087 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0718 00:00:18.390583 1870087 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0718 00:00:18.390665 1870087 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0718 00:00:18.390685 1870087 command_runner.go:130] > [crio.image]
	I0718 00:00:18.390709 1870087 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0718 00:00:18.390744 1870087 command_runner.go:130] > # default_transport = "docker://"
	I0718 00:00:18.390770 1870087 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0718 00:00:18.390796 1870087 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0718 00:00:18.391008 1870087 command_runner.go:130] > # global_auth_file = ""
	I0718 00:00:18.391032 1870087 command_runner.go:130] > # The image used to instantiate infra containers.
	I0718 00:00:18.391053 1870087 command_runner.go:130] > # This option supports live configuration reload.
	I0718 00:00:18.391191 1870087 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0718 00:00:18.391216 1870087 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0718 00:00:18.391250 1870087 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0718 00:00:18.391376 1870087 command_runner.go:130] > # This option supports live configuration reload.
	I0718 00:00:18.391397 1870087 command_runner.go:130] > # pause_image_auth_file = ""
	I0718 00:00:18.391418 1870087 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0718 00:00:18.391453 1870087 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0718 00:00:18.391475 1870087 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0718 00:00:18.391498 1870087 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0718 00:00:18.391528 1870087 command_runner.go:130] > # pause_command = "/pause"
	I0718 00:00:18.391552 1870087 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0718 00:00:18.391583 1870087 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0718 00:00:18.391836 1870087 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0718 00:00:18.391861 1870087 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0718 00:00:18.391894 1870087 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0718 00:00:18.392014 1870087 command_runner.go:130] > # signature_policy = ""
	I0718 00:00:18.392039 1870087 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0718 00:00:18.392062 1870087 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0718 00:00:18.392191 1870087 command_runner.go:130] > # changing them here.
	I0718 00:00:18.392277 1870087 command_runner.go:130] > # insecure_registries = [
	I0718 00:00:18.392364 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.392391 1870087 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0718 00:00:18.392412 1870087 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0718 00:00:18.392545 1870087 command_runner.go:130] > # image_volumes = "mkdir"
	I0718 00:00:18.392568 1870087 command_runner.go:130] > # Temporary directory to use for storing big files
	I0718 00:00:18.392659 1870087 command_runner.go:130] > # big_files_temporary_dir = ""
	I0718 00:00:18.392818 1870087 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0718 00:00:18.392839 1870087 command_runner.go:130] > # CNI plugins.
	I0718 00:00:18.392858 1870087 command_runner.go:130] > [crio.network]
	I0718 00:00:18.393008 1870087 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0718 00:00:18.393032 1870087 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0718 00:00:18.393121 1870087 command_runner.go:130] > # cni_default_network = ""
	I0718 00:00:18.393257 1870087 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0718 00:00:18.393615 1870087 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0718 00:00:18.393662 1870087 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0718 00:00:18.393900 1870087 command_runner.go:130] > # plugin_dirs = [
	I0718 00:00:18.394077 1870087 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0718 00:00:18.394485 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.394553 1870087 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0718 00:00:18.394574 1870087 command_runner.go:130] > [crio.metrics]
	I0718 00:00:18.394598 1870087 command_runner.go:130] > # Globally enable or disable metrics support.
	I0718 00:00:18.394700 1870087 command_runner.go:130] > # enable_metrics = false
	I0718 00:00:18.394729 1870087 command_runner.go:130] > # Specify enabled metrics collectors.
	I0718 00:00:18.394746 1870087 command_runner.go:130] > # Per default all metrics are enabled.
	I0718 00:00:18.394770 1870087 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0718 00:00:18.394808 1870087 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0718 00:00:18.394833 1870087 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0718 00:00:18.395044 1870087 command_runner.go:130] > # metrics_collectors = [
	I0718 00:00:18.395293 1870087 command_runner.go:130] > # 	"operations",
	I0718 00:00:18.395326 1870087 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0718 00:00:18.395345 1870087 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0718 00:00:18.395437 1870087 command_runner.go:130] > # 	"operations_errors",
	I0718 00:00:18.395464 1870087 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0718 00:00:18.395485 1870087 command_runner.go:130] > # 	"image_pulls_by_name",
	I0718 00:00:18.395744 1870087 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0718 00:00:18.395957 1870087 command_runner.go:130] > # 	"image_pulls_failures",
	I0718 00:00:18.395973 1870087 command_runner.go:130] > # 	"image_pulls_successes",
	I0718 00:00:18.396171 1870087 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0718 00:00:18.396281 1870087 command_runner.go:130] > # 	"image_layer_reuse",
	I0718 00:00:18.396308 1870087 command_runner.go:130] > # 	"containers_oom_total",
	I0718 00:00:18.396327 1870087 command_runner.go:130] > # 	"containers_oom",
	I0718 00:00:18.396354 1870087 command_runner.go:130] > # 	"processes_defunct",
	I0718 00:00:18.396698 1870087 command_runner.go:130] > # 	"operations_total",
	I0718 00:00:18.396710 1870087 command_runner.go:130] > # 	"operations_latency_seconds",
	I0718 00:00:18.396717 1870087 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0718 00:00:18.396723 1870087 command_runner.go:130] > # 	"operations_errors_total",
	I0718 00:00:18.396768 1870087 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0718 00:00:18.396776 1870087 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0718 00:00:18.397056 1870087 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0718 00:00:18.397068 1870087 command_runner.go:130] > # 	"image_pulls_success_total",
	I0718 00:00:18.397074 1870087 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0718 00:00:18.397082 1870087 command_runner.go:130] > # 	"containers_oom_count_total",
	I0718 00:00:18.397087 1870087 command_runner.go:130] > # ]
	I0718 00:00:18.397101 1870087 command_runner.go:130] > # The port on which the metrics server will listen.
	I0718 00:00:18.397384 1870087 command_runner.go:130] > # metrics_port = 9090
	I0718 00:00:18.397419 1870087 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0718 00:00:18.397437 1870087 command_runner.go:130] > # metrics_socket = ""
	I0718 00:00:18.397458 1870087 command_runner.go:130] > # The certificate for the secure metrics server.
	I0718 00:00:18.397479 1870087 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0718 00:00:18.397515 1870087 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0718 00:00:18.397536 1870087 command_runner.go:130] > # certificate on any modification event.
	I0718 00:00:18.397557 1870087 command_runner.go:130] > # metrics_cert = ""
	I0718 00:00:18.397589 1870087 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0718 00:00:18.397612 1870087 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0718 00:00:18.397630 1870087 command_runner.go:130] > # metrics_key = ""
	I0718 00:00:18.397652 1870087 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0718 00:00:18.397672 1870087 command_runner.go:130] > [crio.tracing]
	I0718 00:00:18.397706 1870087 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0718 00:00:18.397724 1870087 command_runner.go:130] > # enable_tracing = false
	I0718 00:00:18.397745 1870087 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0718 00:00:18.397981 1870087 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0718 00:00:18.398011 1870087 command_runner.go:130] > # Number of samples to collect per million spans.
	I0718 00:00:18.398292 1870087 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0718 00:00:18.398329 1870087 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0718 00:00:18.398347 1870087 command_runner.go:130] > [crio.stats]
	I0718 00:00:18.398370 1870087 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0718 00:00:18.398421 1870087 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0718 00:00:18.398447 1870087 command_runner.go:130] > # stats_collection_period = 0
	I0718 00:00:18.400119 1870087 command_runner.go:130] ! time="2023-07-18 00:00:18.370384575Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0718 00:00:18.400144 1870087 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0718 00:00:18.400231 1870087 cni.go:84] Creating CNI manager for ""
	I0718 00:00:18.400239 1870087 cni.go:137] 1 nodes found, recommending kindnet
	I0718 00:00:18.400250 1870087 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0718 00:00:18.400270 1870087 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-451668 NodeName:multinode-451668 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 00:00:18.400417 1870087 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-451668"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
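
Minikube produces the kubeadm config above by filling a Go text/template with the cluster parameters logged in the kubeadm options line. A minimal sketch of that idea follows; the struct and template here are illustrative, not minikube's own:

// Sketch (not minikube's actual template): render a kubeadm
// ClusterConfiguration fragment from cluster parameters via text/template.
package main

import (
	"os"
	"text/template"
)

// params mirrors a few fields visible in the kubeadm options line above.
type params struct {
	KubernetesVersion string
	ClusterName       string
	PodSubnet         string
	ServiceSubnet     string
	BindPort          int
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := params{
		KubernetesVersion: "v1.27.3",
		ClusterName:       "mk",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		BindPort:          8443,
	}
	// template.Must panics on a malformed template, acceptable in a sketch.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, p)
}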
	
	I0718 00:00:18.400486 1870087 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-451668 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-451668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
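
In the kubelet unit above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service so the following line fully replaces it. A small sketch of writing such a drop-in (writeDropIn is a hypothetical helper; the path matches the 10-kubeadm.conf transfer logged below):

// The empty ExecStart= clears the inherited command before redefining it;
// without it, systemd rejects a second ExecStart for a simple service.
package main

import "os"

func writeDropIn(args string) error {
	unit := "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\n" +
		"ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet " + args + "\n"
	return os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
		[]byte(unit), 0644)
}

func main() {
	_ = writeDropIn("--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2")
}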
	I0718 00:00:18.400552 1870087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0718 00:00:18.410464 1870087 command_runner.go:130] > kubeadm
	I0718 00:00:18.410482 1870087 command_runner.go:130] > kubectl
	I0718 00:00:18.410487 1870087 command_runner.go:130] > kubelet
	I0718 00:00:18.411833 1870087 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 00:00:18.411920 1870087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0718 00:00:18.422627 1870087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0718 00:00:18.444285 1870087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 00:00:18.466755 1870087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0718 00:00:18.488919 1870087 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0718 00:00:18.493513 1870087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
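
The one-liner above makes the control-plane /etc/hosts entry idempotent: strip any stale line for the hostname, then append the current mapping. The same ensure-entry logic as a Go sketch (ensureHostsEntry is a hypothetical helper, not a minikube function):

// Sketch of the idempotent /etc/hosts update performed by the bash
// one-liner above: drop any old control-plane line, append the new one.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror the grep -v: skip lines already ending in "\t<host>".
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.58.2",
		"control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}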
	I0718 00:00:18.507136 1870087 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668 for IP: 192.168.58.2
	I0718 00:00:18.507168 1870087 certs.go:190] acquiring lock for shared ca certs: {Name:mkb76b85951e1a7e4a78939a9bc1392aa19273b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 00:00:18.507364 1870087 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key
	I0718 00:00:18.507427 1870087 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key
	I0718 00:00:18.507486 1870087 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.key
	I0718 00:00:18.507518 1870087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.crt with IP's: []
	I0718 00:00:18.728396 1870087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.crt ...
	I0718 00:00:18.728425 1870087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.crt: {Name:mkd457ffafb91fb02b310a3a5ff8c6b6d34f394f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 00:00:18.728617 1870087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.key ...
	I0718 00:00:18.728629 1870087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.key: {Name:mk83b017bfb30b19bf3f298bcb4b77572f03fe6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 00:00:18.728715 1870087 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.key.cee25041
	I0718 00:00:18.728730 1870087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0718 00:00:19.324139 1870087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.crt.cee25041 ...
	I0718 00:00:19.324171 1870087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.crt.cee25041: {Name:mk000a9b43bc91c35b2a6571d0fcb13570bd6967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 00:00:19.324367 1870087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.key.cee25041 ...
	I0718 00:00:19.324380 1870087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.key.cee25041: {Name:mke0c26b479b9721c2065c5f7ca72e12dae980a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 00:00:19.324464 1870087 certs.go:337] copying /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.crt
	I0718 00:00:19.324556 1870087 certs.go:341] copying /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.key
	I0718 00:00:19.324615 1870087 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/proxy-client.key
	I0718 00:00:19.324632 1870087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/proxy-client.crt with IP's: []
	I0718 00:00:19.722943 1870087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/proxy-client.crt ...
	I0718 00:00:19.722974 1870087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/proxy-client.crt: {Name:mk7a5866721a44d92b4874f396844248640a4395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 00:00:19.723161 1870087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/proxy-client.key ...
	I0718 00:00:19.723172 1870087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/proxy-client.key: {Name:mk53ce24f3284d5ed0328c00f3d4e462f3454844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
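
The crypto.go steps above generate the profile's client, apiserver, and aggregator certificates, with the apiserver cert carrying the IP SANs listed in the log. A self-contained sketch of creating a certificate with IP SANs via crypto/x509; unlike minikube's CA-signed certs, this one is self-signed for brevity:

// Sketch of the cert generation above in spirit: a key pair plus a
// certificate with IP SANs. Self-signed here; minikube signs with its CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs mirror the apiserver cert list logged above.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}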
	I0718 00:00:19.723250 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 00:00:19.723273 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 00:00:19.723286 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 00:00:19.723300 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 00:00:19.723311 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 00:00:19.723327 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 00:00:19.723342 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 00:00:19.723360 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 00:00:19.723421 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226.pem (1338 bytes)
	W0718 00:00:19.723468 1870087 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226_empty.pem, impossibly tiny 0 bytes
	I0718 00:00:19.723482 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 00:00:19.723511 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem (1082 bytes)
	I0718 00:00:19.723539 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem (1123 bytes)
	I0718 00:00:19.723580 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem (1675 bytes)
	I0718 00:00:19.723633 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem (1708 bytes)
	I0718 00:00:19.723664 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> /usr/share/ca-certificates/18062262.pem
	I0718 00:00:19.723684 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 00:00:19.723696 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226.pem -> /usr/share/ca-certificates/1806226.pem
	I0718 00:00:19.724283 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0718 00:00:19.753345 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 00:00:19.781564 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 00:00:19.811044 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 00:00:19.840473 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 00:00:19.870376 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0718 00:00:19.898926 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 00:00:19.928746 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0718 00:00:19.960497 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem --> /usr/share/ca-certificates/18062262.pem (1708 bytes)
	I0718 00:00:19.990315 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 00:00:20.023423 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226.pem --> /usr/share/ca-certificates/1806226.pem (1338 bytes)
	I0718 00:00:20.055335 1870087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 00:00:20.078843 1870087 ssh_runner.go:195] Run: openssl version
	I0718 00:00:20.086949 1870087 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0718 00:00:20.087108 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18062262.pem && ln -fs /usr/share/ca-certificates/18062262.pem /etc/ssl/certs/18062262.pem"
	I0718 00:00:20.101214 1870087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18062262.pem
	I0718 00:00:20.106316 1870087 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 23:44 /usr/share/ca-certificates/18062262.pem
	I0718 00:00:20.106352 1870087 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 23:44 /usr/share/ca-certificates/18062262.pem
	I0718 00:00:20.106428 1870087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18062262.pem
	I0718 00:00:20.115259 1870087 command_runner.go:130] > 3ec20f2e
	I0718 00:00:20.115736 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18062262.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 00:00:20.128544 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 00:00:20.140620 1870087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 00:00:20.145403 1870087 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0718 00:00:20.145444 1870087 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0718 00:00:20.145525 1870087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 00:00:20.154399 1870087 command_runner.go:130] > b5213941
	I0718 00:00:20.154991 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 00:00:20.167688 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806226.pem && ln -fs /usr/share/ca-certificates/1806226.pem /etc/ssl/certs/1806226.pem"
	I0718 00:00:20.180773 1870087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806226.pem
	I0718 00:00:20.185942 1870087 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 23:44 /usr/share/ca-certificates/1806226.pem
	I0718 00:00:20.185988 1870087 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 23:44 /usr/share/ca-certificates/1806226.pem
	I0718 00:00:20.186051 1870087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806226.pem
	I0718 00:00:20.194650 1870087 command_runner.go:130] > 51391683
	I0718 00:00:20.194793 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806226.pem /etc/ssl/certs/51391683.0"
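
The openssl x509 -hash / ln -fs pairs above follow OpenSSL's c_rehash convention: the library locates trusted CAs in /etc/ssl/certs through <subject-hash>.0 symlinks. A sketch of the same step (linkCert is a hypothetical helper that shells out just as the logged commands do):

// Sketch of the hash-symlink step logged above: compute the subject hash
// with openssl, then create /etc/ssl/certs/<hash>.0 pointing at the cert.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}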
	I0718 00:00:20.206745 1870087 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0718 00:00:20.211389 1870087 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0718 00:00:20.211451 1870087 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0718 00:00:20.211516 1870087 kubeadm.go:404] StartCluster: {Name:multinode-451668 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-451668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0718 00:00:20.211627 1870087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0718 00:00:20.211702 1870087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0718 00:00:20.255799 1870087 cri.go:89] found id: ""
	I0718 00:00:20.255872 1870087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 00:00:20.267783 1870087 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0718 00:00:20.267813 1870087 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0718 00:00:20.267823 1870087 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0718 00:00:20.267934 1870087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 00:00:20.279712 1870087 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0718 00:00:20.279797 1870087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 00:00:20.291030 1870087 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0718 00:00:20.291057 1870087 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0718 00:00:20.291067 1870087 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0718 00:00:20.291078 1870087 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 00:00:20.291108 1870087 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 00:00:20.291143 1870087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0718 00:00:20.347106 1870087 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0718 00:00:20.347174 1870087 command_runner.go:130] > [init] Using Kubernetes version: v1.27.3
	I0718 00:00:20.347226 1870087 kubeadm.go:322] [preflight] Running pre-flight checks
	I0718 00:00:20.347272 1870087 command_runner.go:130] > [preflight] Running pre-flight checks
	I0718 00:00:20.393374 1870087 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0718 00:00:20.393404 1870087 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0718 00:00:20.393457 1870087 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-aws
	I0718 00:00:20.393465 1870087 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1039-aws
	I0718 00:00:20.393497 1870087 kubeadm.go:322] OS: Linux
	I0718 00:00:20.393507 1870087 command_runner.go:130] > OS: Linux
	I0718 00:00:20.393548 1870087 kubeadm.go:322] CGROUPS_CPU: enabled
	I0718 00:00:20.393560 1870087 command_runner.go:130] > CGROUPS_CPU: enabled
	I0718 00:00:20.393604 1870087 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0718 00:00:20.393613 1870087 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0718 00:00:20.393656 1870087 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0718 00:00:20.393666 1870087 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0718 00:00:20.393710 1870087 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0718 00:00:20.393719 1870087 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0718 00:00:20.393764 1870087 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0718 00:00:20.393776 1870087 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0718 00:00:20.393823 1870087 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0718 00:00:20.393833 1870087 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0718 00:00:20.393875 1870087 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0718 00:00:20.393883 1870087 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0718 00:00:20.393928 1870087 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0718 00:00:20.393936 1870087 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0718 00:00:20.393979 1870087 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0718 00:00:20.393987 1870087 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0718 00:00:20.474082 1870087 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 00:00:20.474110 1870087 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 00:00:20.474200 1870087 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 00:00:20.474209 1870087 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 00:00:20.474295 1870087 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0718 00:00:20.474304 1870087 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0718 00:00:20.734791 1870087 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 00:00:20.739040 1870087 out.go:204]   - Generating certificates and keys ...
	I0718 00:00:20.735133 1870087 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 00:00:20.739229 1870087 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0718 00:00:20.739248 1870087 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0718 00:00:20.739330 1870087 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0718 00:00:20.739357 1870087 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0718 00:00:21.057236 1870087 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0718 00:00:21.057268 1870087 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0718 00:00:22.239345 1870087 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0718 00:00:22.239373 1870087 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0718 00:00:22.889674 1870087 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0718 00:00:22.889751 1870087 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0718 00:00:23.504389 1870087 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0718 00:00:23.504416 1870087 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0718 00:00:23.648245 1870087 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0718 00:00:23.648334 1870087 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0718 00:00:23.648602 1870087 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-451668] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0718 00:00:23.648644 1870087 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-451668] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0718 00:00:24.460369 1870087 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0718 00:00:24.460395 1870087 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0718 00:00:24.460886 1870087 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-451668] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0718 00:00:24.460905 1870087 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-451668] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0718 00:00:24.746242 1870087 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0718 00:00:24.746267 1870087 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0718 00:00:25.236683 1870087 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0718 00:00:25.236708 1870087 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0718 00:00:25.432943 1870087 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0718 00:00:25.432974 1870087 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0718 00:00:25.433356 1870087 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 00:00:25.433379 1870087 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 00:00:26.097927 1870087 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 00:00:26.097951 1870087 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 00:00:26.402838 1870087 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 00:00:26.402864 1870087 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 00:00:27.047825 1870087 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 00:00:27.047851 1870087 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 00:00:27.579062 1870087 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 00:00:27.579092 1870087 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 00:00:27.590856 1870087 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 00:00:27.590892 1870087 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 00:00:27.592394 1870087 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 00:00:27.592420 1870087 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 00:00:27.592459 1870087 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0718 00:00:27.592468 1870087 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0718 00:00:27.695827 1870087 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 00:00:27.697995 1870087 out.go:204]   - Booting up control plane ...
	I0718 00:00:27.695977 1870087 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 00:00:27.698100 1870087 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 00:00:27.698112 1870087 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 00:00:27.699286 1870087 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 00:00:27.699302 1870087 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 00:00:27.700470 1870087 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 00:00:27.700488 1870087 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 00:00:27.701257 1870087 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 00:00:27.701273 1870087 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 00:00:27.703658 1870087 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0718 00:00:27.703678 1870087 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0718 00:00:35.706506 1870087 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002796 seconds
	I0718 00:00:35.706531 1870087 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002796 seconds
	I0718 00:00:35.706664 1870087 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 00:00:35.706672 1870087 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 00:00:35.722349 1870087 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 00:00:35.722379 1870087 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 00:00:36.246994 1870087 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0718 00:00:36.247018 1870087 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0718 00:00:36.247190 1870087 kubeadm.go:322] [mark-control-plane] Marking the node multinode-451668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 00:00:36.247196 1870087 command_runner.go:130] > [mark-control-plane] Marking the node multinode-451668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 00:00:36.758547 1870087 kubeadm.go:322] [bootstrap-token] Using token: mercvl.mmhlvzfkjc93gmco
	I0718 00:00:36.760133 1870087 out.go:204]   - Configuring RBAC rules ...
	I0718 00:00:36.758642 1870087 command_runner.go:130] > [bootstrap-token] Using token: mercvl.mmhlvzfkjc93gmco
	I0718 00:00:36.760248 1870087 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 00:00:36.760265 1870087 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 00:00:36.768496 1870087 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 00:00:36.768521 1870087 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 00:00:36.778734 1870087 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 00:00:36.778757 1870087 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 00:00:36.783360 1870087 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 00:00:36.783385 1870087 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 00:00:36.787538 1870087 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 00:00:36.787607 1870087 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 00:00:36.792809 1870087 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 00:00:36.792842 1870087 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 00:00:36.806395 1870087 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 00:00:36.806439 1870087 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 00:00:37.050309 1870087 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0718 00:00:37.050334 1870087 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0718 00:00:37.185378 1870087 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0718 00:00:37.185401 1870087 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0718 00:00:37.186386 1870087 kubeadm.go:322] 
	I0718 00:00:37.186471 1870087 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0718 00:00:37.186481 1870087 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0718 00:00:37.186486 1870087 kubeadm.go:322] 
	I0718 00:00:37.186558 1870087 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0718 00:00:37.186567 1870087 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0718 00:00:37.186571 1870087 kubeadm.go:322] 
	I0718 00:00:37.186595 1870087 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0718 00:00:37.186600 1870087 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0718 00:00:37.186655 1870087 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 00:00:37.186659 1870087 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 00:00:37.186706 1870087 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 00:00:37.186710 1870087 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 00:00:37.186714 1870087 kubeadm.go:322] 
	I0718 00:00:37.186765 1870087 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0718 00:00:37.186770 1870087 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0718 00:00:37.186774 1870087 kubeadm.go:322] 
	I0718 00:00:37.186819 1870087 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 00:00:37.186824 1870087 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 00:00:37.186828 1870087 kubeadm.go:322] 
	I0718 00:00:37.186877 1870087 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0718 00:00:37.186881 1870087 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0718 00:00:37.186951 1870087 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 00:00:37.186955 1870087 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 00:00:37.187018 1870087 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 00:00:37.187024 1870087 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 00:00:37.187028 1870087 kubeadm.go:322] 
	I0718 00:00:37.187107 1870087 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0718 00:00:37.187111 1870087 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0718 00:00:37.187182 1870087 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0718 00:00:37.187187 1870087 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0718 00:00:37.187191 1870087 kubeadm.go:322] 
	I0718 00:00:37.187270 1870087 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token mercvl.mmhlvzfkjc93gmco \
	I0718 00:00:37.187274 1870087 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token mercvl.mmhlvzfkjc93gmco \
	I0718 00:00:37.187370 1870087 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b5091145d8291edee463dab95a1bdfeb1e97f89842481bec35f68788c073ce7f \
	I0718 00:00:37.187375 1870087 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:b5091145d8291edee463dab95a1bdfeb1e97f89842481bec35f68788c073ce7f \
	I0718 00:00:37.187393 1870087 kubeadm.go:322] 	--control-plane 
	I0718 00:00:37.187397 1870087 command_runner.go:130] > 	--control-plane 
	I0718 00:00:37.187401 1870087 kubeadm.go:322] 
	I0718 00:00:37.187489 1870087 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0718 00:00:37.187494 1870087 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0718 00:00:37.187506 1870087 kubeadm.go:322] 
	I0718 00:00:37.187583 1870087 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token mercvl.mmhlvzfkjc93gmco \
	I0718 00:00:37.187587 1870087 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mercvl.mmhlvzfkjc93gmco \
	I0718 00:00:37.187682 1870087 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b5091145d8291edee463dab95a1bdfeb1e97f89842481bec35f68788c073ce7f 
	I0718 00:00:37.187687 1870087 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:b5091145d8291edee463dab95a1bdfeb1e97f89842481bec35f68788c073ce7f 
	I0718 00:00:37.191176 1870087 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0718 00:00:37.191218 1870087 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0718 00:00:37.191389 1870087 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0718 00:00:37.191412 1870087 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0718 00:00:37.191457 1870087 cni.go:84] Creating CNI manager for ""
	I0718 00:00:37.191465 1870087 cni.go:137] 1 nodes found, recommending kindnet
	I0718 00:00:37.194775 1870087 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0718 00:00:37.196616 1870087 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0718 00:00:37.202345 1870087 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0718 00:00:37.202368 1870087 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0718 00:00:37.202380 1870087 command_runner.go:130] > Device: 3ah/58d	Inode: 2083390     Links: 1
	I0718 00:00:37.202387 1870087 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0718 00:00:37.202394 1870087 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0718 00:00:37.202441 1870087 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0718 00:00:37.202453 1870087 command_runner.go:130] > Change: 2023-07-17 23:37:44.120234222 +0000
	I0718 00:00:37.202460 1870087 command_runner.go:130] >  Birth: 2023-07-17 23:37:44.076234663 +0000
	I0718 00:00:37.202902 1870087 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0718 00:00:37.202919 1870087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0718 00:00:37.255793 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0718 00:00:38.096200 1870087 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0718 00:00:38.105382 1870087 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0718 00:00:38.119835 1870087 command_runner.go:130] > serviceaccount/kindnet created
	I0718 00:00:38.133836 1870087 command_runner.go:130] > daemonset.apps/kindnet created
	I0718 00:00:38.139389 1870087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 00:00:38.139532 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:38.139597 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=multinode-451668 minikube.k8s.io/updated_at=2023_07_18T00_00_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:38.277191 1870087 command_runner.go:130] > node/multinode-451668 labeled
	I0718 00:00:38.280814 1870087 command_runner.go:130] > -16
	I0718 00:00:38.280848 1870087 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0718 00:00:38.280895 1870087 ops.go:34] apiserver oom_adj: -16
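
The -16 read above is kube-apiserver's oom_adj; a negative value steers the OOM killer away from the apiserver, and minikube records it as a sanity check. A sketch of the same pgrep-and-read, mirroring the /bin/bash -c command logged above:

// Sketch of the oom_adj check above: find kube-apiserver's pid with pgrep
// and read /proc/<pid>/oom_adj, as "cat /proc/$(pgrep kube-apiserver)/oom_adj" does.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pid := strings.Fields(string(out))[0] // first matching pid
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}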
	I0718 00:00:38.280983 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:38.398680 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:38.899441 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:38.988978 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:39.399599 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:39.487414 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:39.898905 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:39.989570 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:40.398927 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:40.487487 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:40.899647 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:40.987774 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:41.399166 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:41.490632 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:41.899209 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:41.990034 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:42.399325 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:42.499885 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:42.899627 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:42.989142 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:43.399768 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:43.488074 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:43.899127 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:43.999981 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:44.399568 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:44.494206 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:44.899882 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:44.991413 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:45.398962 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:45.494176 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:45.899578 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:46.028822 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:46.399345 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:46.493041 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:46.899559 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:46.993929 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:47.399612 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:47.513534 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:47.898916 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:48.005568 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:48.399372 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:48.494780 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:48.899486 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:49.052857 1870087 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0718 00:00:49.399373 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 00:00:49.511585 1870087 command_runner.go:130] > NAME      SECRETS   AGE
	I0718 00:00:49.511604 1870087 command_runner.go:130] > default   0         0s
	I0718 00:00:49.511626 1870087 kubeadm.go:1081] duration metric: took 11.372138634s to wait for elevateKubeSystemPrivileges.
	I0718 00:00:49.511638 1870087 kubeadm.go:406] StartCluster complete in 29.300127378s
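
The run of "serviceaccounts \"default\" not found" errors above is expected: minikube polls kubectl get sa default roughly every 500ms until kube-controller-manager creates the default ServiceAccount, which the 11.37s elevateKubeSystemPrivileges metric reflects. A sketch of that polling pattern (paths and arguments from the log, which runs the command under sudo; the loop itself is illustrative):

// Sketch of the polling above: retry "kubectl get sa default" until the
// controller-manager has created the ServiceAccount or a deadline passes.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.27.3/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for default ServiceAccount")
}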
	I0718 00:00:49.511663 1870087 settings.go:142] acquiring lock: {Name:mk74b5b544da6acf33d2b75c01a65c483577bcd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 00:00:49.511726 1870087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0718 00:00:49.512576 1870087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-1800837/kubeconfig: {Name:mkabbac053a2a3ee682ab9031f228204945b972c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 00:00:49.513306 1870087 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0718 00:00:49.513614 1870087 kapi.go:59] client config for multinode-451668: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.key", CAFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
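The rest.Config dump above shows how the client authenticates: the profile's client certificate and key plus the cluster CA, against https://192.168.58.2:8443. A minimal sketch, assuming client-go, of building an equivalent clientset from those same fields (illustration only; minikube assembles this config in kapi.go):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newMultinodeClient mirrors the fields visible in the kapi.go dump
// above; the file paths are the profile files named in the log.
func newMultinodeClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://192.168.58.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.key",
			CAFile:   "/home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, fmt.Errorf("building clientset: %w", err)
	}
	return cs, nil
}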
	I0718 00:00:49.514839 1870087 config.go:182] Loaded profile config "multinode-451668": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0718 00:00:49.514898 1870087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0718 00:00:49.515123 1870087 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0718 00:00:49.515224 1870087 addons.go:69] Setting storage-provisioner=true in profile "multinode-451668"
	I0718 00:00:49.515239 1870087 addons.go:231] Setting addon storage-provisioner=true in "multinode-451668"
	I0718 00:00:49.515298 1870087 host.go:66] Checking if "multinode-451668" exists ...
	I0718 00:00:49.515873 1870087 cli_runner.go:164] Run: docker container inspect multinode-451668 --format={{.State.Status}}
	I0718 00:00:49.516799 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0718 00:00:49.516856 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:49.516899 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:49.516941 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:49.517264 1870087 cert_rotation.go:137] Starting client certificate rotation controller
	I0718 00:00:49.517802 1870087 addons.go:69] Setting default-storageclass=true in profile "multinode-451668"
	I0718 00:00:49.517883 1870087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-451668"
	I0718 00:00:49.518351 1870087 cli_runner.go:164] Run: docker container inspect multinode-451668 --format={{.State.Status}}
	I0718 00:00:49.565426 1870087 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I0718 00:00:49.565452 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:49.565461 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:49 GMT
	I0718 00:00:49.565468 1870087 round_trippers.go:580]     Audit-Id: 023379a9-70cf-4cfa-84a1-1a1caca06082
	I0718 00:00:49.565474 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:49.565481 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:49.565488 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:49.565494 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:49.565501 1870087 round_trippers.go:580]     Content-Length: 291
	I0718 00:00:49.565528 1870087 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"31e25b05-9d9a-48b7-ba3e-9797c0a06c06","resourceVersion":"357","creationTimestamp":"2023-07-18T00:00:37Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0718 00:00:49.565917 1870087 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"31e25b05-9d9a-48b7-ba3e-9797c0a06c06","resourceVersion":"357","creationTimestamp":"2023-07-18T00:00:37Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0718 00:00:49.565965 1870087 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0718 00:00:49.565977 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:49.565986 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:49.565998 1870087 round_trippers.go:473]     Content-Type: application/json
	I0718 00:00:49.566005 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:49.576605 1870087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 00:00:49.578684 1870087 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 00:00:49.578702 1870087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0718 00:00:49.578769 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:00:49.575938 1870087 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0718 00:00:49.579320 1870087 kapi.go:59] client config for multinode-451668: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.key", CAFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 00:00:49.579654 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0718 00:00:49.579662 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:49.579671 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:49.579678 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:49.593706 1870087 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0718 00:00:49.593731 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:49.593740 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:49.593752 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:49.593761 1870087 round_trippers.go:580]     Content-Length: 109
	I0718 00:00:49.593774 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:49 GMT
	I0718 00:00:49.593781 1870087 round_trippers.go:580]     Audit-Id: 83444c5a-d522-4bd1-9fed-f293c2d94102
	I0718 00:00:49.593788 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:49.593794 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:49.593815 1870087 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"364"},"items":[]}
	I0718 00:00:49.593834 1870087 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0718 00:00:49.593841 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:49.593847 1870087 round_trippers.go:580]     Audit-Id: f0d3cf75-4ac7-48be-99ed-dc142dfc8eee
	I0718 00:00:49.593854 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:49.593860 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:49.593866 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:49.593873 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:49.593880 1870087 round_trippers.go:580]     Content-Length: 291
	I0718 00:00:49.593886 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:49 GMT
	I0718 00:00:49.593905 1870087 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"31e25b05-9d9a-48b7-ba3e-9797c0a06c06","resourceVersion":"364","creationTimestamp":"2023-07-18T00:00:37Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0718 00:00:49.594235 1870087 addons.go:231] Setting addon default-storageclass=true in "multinode-451668"
	I0718 00:00:49.594264 1870087 host.go:66] Checking if "multinode-451668" exists ...
	I0718 00:00:49.594775 1870087 cli_runner.go:164] Run: docker container inspect multinode-451668 --format={{.State.Status}}
	I0718 00:00:49.613130 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34738 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa Username:docker}
	I0718 00:00:49.646658 1870087 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0718 00:00:49.646681 1870087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0718 00:00:49.646772 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:00:49.676698 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34738 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa Username:docker}
	I0718 00:00:49.779117 1870087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 00:00:49.819608 1870087 command_runner.go:130] > apiVersion: v1
	I0718 00:00:49.819630 1870087 command_runner.go:130] > data:
	I0718 00:00:49.819636 1870087 command_runner.go:130] >   Corefile: |
	I0718 00:00:49.819641 1870087 command_runner.go:130] >     .:53 {
	I0718 00:00:49.819646 1870087 command_runner.go:130] >         errors
	I0718 00:00:49.819651 1870087 command_runner.go:130] >         health {
	I0718 00:00:49.819658 1870087 command_runner.go:130] >            lameduck 5s
	I0718 00:00:49.819662 1870087 command_runner.go:130] >         }
	I0718 00:00:49.819667 1870087 command_runner.go:130] >         ready
	I0718 00:00:49.819676 1870087 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0718 00:00:49.819684 1870087 command_runner.go:130] >            pods insecure
	I0718 00:00:49.819693 1870087 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0718 00:00:49.819699 1870087 command_runner.go:130] >            ttl 30
	I0718 00:00:49.819708 1870087 command_runner.go:130] >         }
	I0718 00:00:49.819713 1870087 command_runner.go:130] >         prometheus :9153
	I0718 00:00:49.819719 1870087 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0718 00:00:49.819727 1870087 command_runner.go:130] >            max_concurrent 1000
	I0718 00:00:49.819731 1870087 command_runner.go:130] >         }
	I0718 00:00:49.819736 1870087 command_runner.go:130] >         cache 30
	I0718 00:00:49.819743 1870087 command_runner.go:130] >         loop
	I0718 00:00:49.819751 1870087 command_runner.go:130] >         reload
	I0718 00:00:49.819756 1870087 command_runner.go:130] >         loadbalance
	I0718 00:00:49.819761 1870087 command_runner.go:130] >     }
	I0718 00:00:49.819765 1870087 command_runner.go:130] > kind: ConfigMap
	I0718 00:00:49.819772 1870087 command_runner.go:130] > metadata:
	I0718 00:00:49.819781 1870087 command_runner.go:130] >   creationTimestamp: "2023-07-18T00:00:37Z"
	I0718 00:00:49.819790 1870087 command_runner.go:130] >   name: coredns
	I0718 00:00:49.819796 1870087 command_runner.go:130] >   namespace: kube-system
	I0718 00:00:49.819801 1870087 command_runner.go:130] >   resourceVersion: "259"
	I0718 00:00:49.819811 1870087 command_runner.go:130] >   uid: 82580ff4-f68d-4cf8-a6c9-c997282c937d
	I0718 00:00:49.819943 1870087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
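The bash pipeline above edits the Corefile fetched a few lines earlier: the first sed expression inserts a hosts block immediately before the forward plugin, the second inserts a log directive before errors, and the result is piped back into kubectl replace. Assuming both expressions apply cleanly to the Corefile dumped above, the rewritten server block would read:

.:53 {
    log
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    hosts {
       192.168.58.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}

This is what makes host.minikube.internal resolve to the host gateway (192.168.58.1) from inside the cluster, confirmed by the "host record injected" message further down.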
	I0718 00:00:49.873351 1870087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0718 00:00:50.094867 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0718 00:00:50.095001 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:50.095042 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:50.095093 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:50.268550 1870087 round_trippers.go:574] Response Status: 200 OK in 173 milliseconds
	I0718 00:00:50.268622 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:50.268644 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:50.268667 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:50.268703 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:50.268731 1870087 round_trippers.go:580]     Content-Length: 291
	I0718 00:00:50.268753 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:50 GMT
	I0718 00:00:50.268790 1870087 round_trippers.go:580]     Audit-Id: aeff5f9f-c75c-44e4-861f-cf6b97036bc2
	I0718 00:00:50.268816 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:50.290224 1870087 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"31e25b05-9d9a-48b7-ba3e-9797c0a06c06","resourceVersion":"374","creationTimestamp":"2023-07-18T00:00:37Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0718 00:00:50.290438 1870087 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-451668" context rescaled to 1 replicas
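The GET/PUT pair on the coredns scale subresource above takes the deployment from 2 replicas down to 1. A sketch of the same read-modify-write using client-go's typed scale client (minikube issues the raw REST calls shown; this equivalent is for illustration):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleCoreDNS performs the GET-then-PUT on the scale subresource
// visible in the round_trippers lines above.
func scaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("reading coredns scale: %w", err)
	}
	scale.Spec.Replicas = replicas // e.g. 2 -> 1, as in the log
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		return fmt.Errorf("updating coredns scale: %w", err)
	}
	return nil
}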
	I0718 00:00:50.290487 1870087 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0718 00:00:50.300620 1870087 out.go:177] * Verifying Kubernetes components...
	I0718 00:00:50.302328 1870087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 00:00:50.837088 1870087 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0718 00:00:50.846882 1870087 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0718 00:00:50.859071 1870087 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0718 00:00:50.873294 1870087 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0718 00:00:50.887370 1870087 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0718 00:00:50.900282 1870087 command_runner.go:130] > pod/storage-provisioner created
	I0718 00:00:50.905660 1870087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.126470655s)
	I0718 00:00:50.905682 1870087 command_runner.go:130] > configmap/coredns replaced
	I0718 00:00:50.905837 1870087 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.085879399s)
	I0718 00:00:50.905870 1870087 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0718 00:00:50.905914 1870087 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0718 00:00:50.905964 1870087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.032552665s)
	I0718 00:00:50.906375 1870087 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0718 00:00:50.906680 1870087 kapi.go:59] client config for multinode-451668: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.key", CAFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 00:00:50.907010 1870087 node_ready.go:35] waiting up to 6m0s for node "multinode-451668" to be "Ready" ...
	I0718 00:00:50.907093 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:50.907099 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:50.907108 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:50.907115 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:50.910517 1870087 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0718 00:00:50.912175 1870087 addons.go:502] enable addons completed in 1.39703699s: enabled=[storage-provisioner default-storageclass]
	I0718 00:00:50.913380 1870087 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0718 00:00:50.913399 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:50.913408 1870087 round_trippers.go:580]     Audit-Id: 2a69168e-1fe8-43b7-8af8-1b2fc27d4653
	I0718 00:00:50.913415 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:50.913425 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:50.913437 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:50.913444 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:50.913455 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:50 GMT
	I0718 00:00:50.913801 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:51.415592 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:51.415616 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:51.415625 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:51.415634 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:51.418318 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:51.418343 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:51.418352 1870087 round_trippers.go:580]     Audit-Id: 3c7061be-7295-460f-929f-754fd7ea72ca
	I0718 00:00:51.418359 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:51.418366 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:51.418394 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:51.418401 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:51.418446 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:51 GMT
	I0718 00:00:51.418585 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:51.914740 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:51.914765 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:51.914777 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:51.914784 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:51.917394 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:51.917465 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:51.917481 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:51.917489 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:51.917500 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:51.917507 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:51.917514 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:51 GMT
	I0718 00:00:51.917521 1870087 round_trippers.go:580]     Audit-Id: 73d0c86c-4aa9-497a-84b4-fd834bb7d45f
	I0718 00:00:51.917612 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:52.415008 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:52.415032 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:52.415042 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:52.415051 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:52.417720 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:52.417743 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:52.417759 1870087 round_trippers.go:580]     Audit-Id: 73f7c822-67fd-4eb2-ba67-ea4677885515
	I0718 00:00:52.417766 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:52.417773 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:52.417780 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:52.417787 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:52.417793 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:52 GMT
	I0718 00:00:52.418237 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:52.914699 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:52.914719 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:52.914729 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:52.914736 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:52.917431 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:52.917451 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:52.917459 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:52 GMT
	I0718 00:00:52.917466 1870087 round_trippers.go:580]     Audit-Id: afdb17de-7d9e-4c2d-acb7-9fc9766e7f0f
	I0718 00:00:52.917473 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:52.917479 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:52.917486 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:52.917493 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:52.917876 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:52.918289 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
	I0718 00:00:53.414762 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:53.414786 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:53.414796 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:53.414804 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:53.417423 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:53.417443 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:53.417452 1870087 round_trippers.go:580]     Audit-Id: 465a7224-8bc9-4626-a0bd-72d4a9407ef6
	I0718 00:00:53.417459 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:53.417465 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:53.417472 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:53.417479 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:53.417486 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:53 GMT
	I0718 00:00:53.417672 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:53.915216 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:53.915239 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:53.915251 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:53.915259 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:53.917866 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:53.917887 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:53.917895 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:53.917902 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:53.917909 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:53.917916 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:53 GMT
	I0718 00:00:53.917923 1870087 round_trippers.go:580]     Audit-Id: b1aa432e-0f82-411a-81c2-c6838870c7f6
	I0718 00:00:53.917929 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:53.918098 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:54.415636 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:54.415660 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:54.415671 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:54.415678 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:54.418355 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:54.418377 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:54.418386 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:54.418393 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:54.418400 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:54.418423 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:54 GMT
	I0718 00:00:54.418432 1870087 round_trippers.go:580]     Audit-Id: ed740300-0ae4-4777-9d35-5b6659697608
	I0718 00:00:54.418438 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:54.418655 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:54.914708 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:54.914732 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:54.914743 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:54.914751 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:54.917410 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:54.917438 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:54.917447 1870087 round_trippers.go:580]     Audit-Id: e90a363b-38b6-45ad-bf9a-07d92ac60306
	I0718 00:00:54.917454 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:54.917461 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:54.917467 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:54.917474 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:54.917482 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:54 GMT
	I0718 00:00:54.917598 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:55.415702 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:55.415724 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:55.415736 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:55.415743 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:55.418552 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:55.418581 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:55.418590 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:55.418597 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:55 GMT
	I0718 00:00:55.418606 1870087 round_trippers.go:580]     Audit-Id: 4e52a107-a881-420b-b0c9-4fa189eecf6b
	I0718 00:00:55.418613 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:55.418621 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:55.418628 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:55.418790 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:55.419205 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
	I0718 00:00:55.914794 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:55.914817 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:55.914827 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:55.914834 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:55.917420 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:55.917442 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:55.917451 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:55.917459 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:55.917465 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:55 GMT
	I0718 00:00:55.917472 1870087 round_trippers.go:580]     Audit-Id: d7959fab-34f6-4e16-99bd-887ed946600e
	I0718 00:00:55.917478 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:55.917485 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:55.917830 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:56.415245 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:56.415269 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:56.415279 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:56.415288 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:56.417948 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:56.417975 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:56.417984 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:56.417991 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:56.417997 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:56.418005 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:56 GMT
	I0718 00:00:56.418015 1870087 round_trippers.go:580]     Audit-Id: e45ff6db-01f0-4a11-88d2-04672fda37c6
	I0718 00:00:56.418028 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:56.418152 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:56.915374 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:56.915397 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:56.915408 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:56.915423 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:56.917918 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:56.917938 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:56.917946 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:56.917953 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:56.917960 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:56.917966 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:56 GMT
	I0718 00:00:56.917973 1870087 round_trippers.go:580]     Audit-Id: 77cbefda-3512-4f00-be91-00ac2b58c6fc
	I0718 00:00:56.917979 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:56.918130 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:57.414712 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:57.414737 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:57.414747 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:57.414755 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:57.417511 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:57.417533 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:57.417542 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:57 GMT
	I0718 00:00:57.417549 1870087 round_trippers.go:580]     Audit-Id: ce1cc454-fceb-4bce-9e78-238e2239fc8d
	I0718 00:00:57.417555 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:57.417562 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:57.417568 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:57.417577 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:57.417705 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:57.914701 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:57.914726 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:57.914736 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:57.914744 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:57.917561 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:57.917586 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:57.917595 1870087 round_trippers.go:580]     Audit-Id: 45abb118-6a9b-4bb8-bc57-5613d1def204
	I0718 00:00:57.917602 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:57.917609 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:57.917615 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:57.917622 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:57.917628 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:57 GMT
	I0718 00:00:57.917825 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:57.918259 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
	I0718 00:00:58.414913 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:58.414937 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:58.414947 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:58.414955 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:58.417604 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:58.417624 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:58.417633 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:58 GMT
	I0718 00:00:58.417640 1870087 round_trippers.go:580]     Audit-Id: 567560e6-6185-4b78-939f-34b085522b39
	I0718 00:00:58.417646 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:58.417653 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:58.417659 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:58.417667 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:58.417792 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:58.915364 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:58.915388 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:58.915399 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:58.915414 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:58.917818 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:58.917842 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:58.917850 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:58.917857 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:58.917864 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:58 GMT
	I0718 00:00:58.917871 1870087 round_trippers.go:580]     Audit-Id: e612fdaf-619e-4252-96d4-90f6d42dbf3c
	I0718 00:00:58.917878 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:58.917886 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:58.918027 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:59.415095 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:59.415123 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:59.415134 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:59.415142 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:59.417735 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:59.417760 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:59.417769 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:59.417775 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:59.417783 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:59.417790 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:59 GMT
	I0718 00:00:59.417798 1870087 round_trippers.go:580]     Audit-Id: 42978acc-a872-4d22-9c19-6ba63be6f944
	I0718 00:00:59.417804 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:59.418005 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:59.915278 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:00:59.915300 1870087 round_trippers.go:469] Request Headers:
	I0718 00:00:59.915309 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:00:59.915317 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:00:59.917826 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:00:59.917847 1870087 round_trippers.go:577] Response Headers:
	I0718 00:00:59.917856 1870087 round_trippers.go:580]     Audit-Id: 79bf3a7c-bb07-452a-a889-626ff7c79c27
	I0718 00:00:59.917863 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:00:59.917869 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:00:59.917876 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:00:59.917883 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:00:59.917890 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:00:59 GMT
	I0718 00:00:59.918001 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:00:59.918438 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
	I0718 00:01:00.414827 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:00.414852 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:00.414863 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:00.414871 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:00.417684 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:00.417706 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:00.417716 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:00.417723 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:00 GMT
	I0718 00:01:00.417730 1870087 round_trippers.go:580]     Audit-Id: b144e09d-41af-4004-88a9-2f13d0d82f8a
	I0718 00:01:00.417737 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:00.417744 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:00.417751 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:00.417972 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:00.915619 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:00.915656 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:00.915667 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:00.915676 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:00.918243 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:00.918264 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:00.918272 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:00.918280 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:00 GMT
	I0718 00:01:00.918287 1870087 round_trippers.go:580]     Audit-Id: f3b999f4-80ec-41e2-9e62-4c5a7b8494ee
	I0718 00:01:00.918293 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:00.918300 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:00.918308 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:00.918487 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:01.414744 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:01.414770 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:01.414781 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:01.414789 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:01.417527 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:01.417549 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:01.417558 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:01 GMT
	I0718 00:01:01.417564 1870087 round_trippers.go:580]     Audit-Id: 8610ae5e-ff71-404b-b492-38e82e04d4ff
	I0718 00:01:01.417571 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:01.417577 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:01.417584 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:01.417591 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:01.417726 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:01.914814 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:01.914841 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:01.914851 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:01.914859 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:01.917504 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:01.917526 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:01.917535 1870087 round_trippers.go:580]     Audit-Id: f445b63a-35ba-4681-bba1-059a8d729874
	I0718 00:01:01.917543 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:01.917550 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:01.917556 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:01.917563 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:01.917570 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:01 GMT
	I0718 00:01:01.917921 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:02.415064 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:02.415092 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:02.415103 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:02.415110 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:02.417958 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:02.417985 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:02.417995 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:02.418002 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:02.418009 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:02.418016 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:02 GMT
	I0718 00:01:02.418023 1870087 round_trippers.go:580]     Audit-Id: 51c5548b-5429-4ba4-bfcb-145fc6f82c81
	I0718 00:01:02.418029 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:02.418657 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:02.419074 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
	I0718 00:01:02.915303 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:02.915342 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:02.915353 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:02.915360 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:02.917775 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:02.917796 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:02.917807 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:02.917814 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:02 GMT
	I0718 00:01:02.917821 1870087 round_trippers.go:580]     Audit-Id: f76591db-8dd2-4528-b80e-fadfe267fc03
	I0718 00:01:02.917830 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:02.917865 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:02.917871 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:02.917997 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:03.415152 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:03.415178 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:03.415191 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:03.415199 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:03.418240 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:01:03.418268 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:03.418278 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:03.418285 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:03.418292 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:03 GMT
	I0718 00:01:03.418299 1870087 round_trippers.go:580]     Audit-Id: a7de9fd0-c423-4307-a890-ea53de8ef029
	I0718 00:01:03.418306 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:03.418313 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:03.418792 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:03.915468 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:03.915493 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:03.915504 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:03.915512 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:03.918079 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:03.918104 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:03.918113 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:03.918120 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:03.918127 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:03.918133 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:03.918140 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:03 GMT
	I0718 00:01:03.918146 1870087 round_trippers.go:580]     Audit-Id: 48ff52b8-a4d4-416d-9c62-cc7e50c9add0
	I0718 00:01:03.918479 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:04.415562 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:04.415585 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:04.415596 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:04.415604 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:04.418081 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:04.418109 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:04.418119 1870087 round_trippers.go:580]     Audit-Id: 92c22ba4-f1eb-43c8-a146-3fd65a2838b9
	I0718 00:01:04.418126 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:04.418132 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:04.418139 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:04.418146 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:04.418153 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:04 GMT
	I0718 00:01:04.418298 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:04.915464 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:04.915487 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:04.915502 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:04.915509 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:04.918467 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:04.918495 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:04.918504 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:04.918511 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:04.918518 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:04.918525 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:04 GMT
	I0718 00:01:04.918533 1870087 round_trippers.go:580]     Audit-Id: c25d6b21-71d8-4b24-9bb7-5c8bd6186211
	I0718 00:01:04.918540 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:04.918702 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:04.919105 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
	I0718 00:01:05.414772 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:05.414798 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:05.414809 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:05.414818 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:05.417661 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:05.417681 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:05.417690 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:05.417697 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:05.417703 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:05.417710 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:05 GMT
	I0718 00:01:05.417717 1870087 round_trippers.go:580]     Audit-Id: 11c5a203-f873-4481-9084-6f7d145bc4c4
	I0718 00:01:05.417724 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:05.417850 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:05.914685 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:05.914726 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:05.914736 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:05.914744 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:05.917284 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:05.917308 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:05.917317 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:05.917324 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:05 GMT
	I0718 00:01:05.917331 1870087 round_trippers.go:580]     Audit-Id: 20d490e3-2688-4366-b652-bddd6feea86b
	I0718 00:01:05.917337 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:05.917344 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:05.917351 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:05.917441 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:06.415285 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:06.415310 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:06.415320 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:06.415327 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:06.418029 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:06.418058 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:06.418071 1870087 round_trippers.go:580]     Audit-Id: d1cf9948-967f-4a87-a460-ea7a17b1eb63
	I0718 00:01:06.418079 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:06.418086 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:06.418094 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:06.418104 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:06.418120 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:06 GMT
	I0718 00:01:06.418390 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:06.915609 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:06.915632 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:06.915643 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:06.915651 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:06.918096 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:06.918117 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:06.918125 1870087 round_trippers.go:580]     Audit-Id: ae6e2dd5-56bf-430c-a3af-06a153ae3da5
	I0718 00:01:06.918134 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:06.918141 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:06.918147 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:06.918153 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:06.918160 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:06 GMT
	I0718 00:01:06.918324 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:07.415619 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:07.415645 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:07.415655 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:07.415663 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:07.418394 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:07.418482 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:07.418498 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:07.418506 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:07.418517 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:07.418524 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:07.418532 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:07 GMT
	I0718 00:01:07.418562 1870087 round_trippers.go:580]     Audit-Id: 1a1c7d50-7572-4c22-ad6b-b7a905fa9a84
	I0718 00:01:07.418685 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:07.419125 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
	I0718 00:01:07.915630 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:07.915652 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:07.915663 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:07.915672 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:07.918219 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:07.918240 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:07.918249 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:07.918255 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:07.918262 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:07.918269 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:07.918276 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:07 GMT
	I0718 00:01:07.918282 1870087 round_trippers.go:580]     Audit-Id: 2c5caa2e-db5d-4ac0-8b08-adefe2bff567
	I0718 00:01:07.918365 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:08.414736 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:08.414761 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:08.414771 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:08.414778 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:08.417480 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:08.417506 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:08.417516 1870087 round_trippers.go:580]     Audit-Id: e65b8623-bd47-40e1-95a6-e5bf1f67aab1
	I0718 00:01:08.417523 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:08.417530 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:08.417537 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:08.417548 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:08.417555 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:08 GMT
	I0718 00:01:08.417696 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:08.914782 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:08.914815 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:08.914833 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:08.914841 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:08.917573 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:08.917606 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:08.917616 1870087 round_trippers.go:580]     Audit-Id: 9f837e9f-de05-4693-a14c-defb0a08a207
	I0718 00:01:08.917623 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:08.917630 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:08.917637 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:08.917649 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:08.917662 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:08 GMT
	I0718 00:01:08.917977 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:09.415585 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:09.415606 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:09.415617 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:09.415625 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:09.418091 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:09.418111 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:09.418119 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:09.418126 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:09.418133 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:09.418139 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:09.418146 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:09 GMT
	I0718 00:01:09.418152 1870087 round_trippers.go:580]     Audit-Id: f2909d70-4259-40f1-b98b-10c46d49ce96
	I0718 00:01:09.418326 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:09.915294 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:09.915317 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:09.915328 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:09.915338 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:09.917788 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:09.917813 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:09.917826 1870087 round_trippers.go:580]     Audit-Id: 9a9d06e5-fa22-4c45-883d-b10c50af8be2
	I0718 00:01:09.917833 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:09.917841 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:09.917847 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:09.917863 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:09.917870 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:09 GMT
	I0718 00:01:09.918103 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:09.918534 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
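
	(Editor's note: the repeated GET/Response cycles above are the node-readiness wait: roughly every 500ms the client fetches the node object and checks its Ready condition, logging `has status "Ready":"False"` until the kubelet reports Ready. As an illustrative sketch only, not minikube's actual node_ready.go, and assuming the client-go library plus a reachable kubeconfig, the equivalent check looks like:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node's Ready condition is True,
	// i.e. the same predicate the poll above is waiting on.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Assumption: a kubeconfig at the default location; node name taken from the log.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		for {
			ready, err := nodeReady(ctx, cs, "multinode-451668")
			if err != nil {
				panic(err)
			}
			if ready {
				fmt.Println("node is Ready")
				return
			}
			fmt.Println(`node has status "Ready":"False"`)
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the timestamps above
		}
	}

	The 500ms sleep mirrors the polling interval visible in the log; the real wait additionally enforces an overall timeout rather than looping forever. End of editor's note; the captured log continues below.)
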
	I0718 00:01:10.415248 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:10.415267 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:10.415277 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:10.415284 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:10.418183 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:10.418204 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:10.418213 1870087 round_trippers.go:580]     Audit-Id: 776a49c6-6127-4072-9bc5-f953f7a6ae07
	I0718 00:01:10.418219 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:10.418226 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:10.418232 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:10.418239 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:10.418246 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:10 GMT
	I0718 00:01:10.418382 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:10.914742 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:10.914769 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:10.914781 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:10.914789 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:10.917795 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:10.917816 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:10.917826 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:10 GMT
	I0718 00:01:10.917833 1870087 round_trippers.go:580]     Audit-Id: 726836a6-c503-4917-8326-908350e00629
	I0718 00:01:10.917840 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:10.917847 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:10.917854 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:10.917861 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:10.917965 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:11.415352 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:11.415374 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:11.415384 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:11.415391 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:11.418199 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:11.418221 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:11.418230 1870087 round_trippers.go:580]     Audit-Id: ab650cfd-3826-4172-a526-43db56119f74
	I0718 00:01:11.418237 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:11.418260 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:11.418270 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:11.418277 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:11.418288 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:11 GMT
	I0718 00:01:11.418444 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:11.914761 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:11.914785 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:11.914795 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:11.914804 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:11.917456 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:11.917491 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:11.917501 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:11.917508 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:11.917515 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:11 GMT
	I0718 00:01:11.917521 1870087 round_trippers.go:580]     Audit-Id: 5c8d43fb-da91-4755-b92f-a181d29559f4
	I0718 00:01:11.917528 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:11.917534 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:11.917631 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:12.415702 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:12.415727 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:12.415738 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:12.415746 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:12.418451 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:12.418477 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:12.418486 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:12.418493 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:12.418500 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:12.418507 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:12.418514 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:12 GMT
	I0718 00:01:12.418525 1870087 round_trippers.go:580]     Audit-Id: 813ba221-dcf4-4160-a8d5-0d1a47914f04
	I0718 00:01:12.418651 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:12.419053 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
	I0718 00:01:12.914735 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:12.914761 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:12.914772 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:12.914779 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:12.917405 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:12.917425 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:12.917434 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:12.917440 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:12.917447 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:12 GMT
	I0718 00:01:12.917454 1870087 round_trippers.go:580]     Audit-Id: f2962be3-c18e-4a2f-8725-105a34d1a3c7
	I0718 00:01:12.917461 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:12.917467 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:12.917631 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:13.414981 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:13.415004 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:13.415015 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:13.415023 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:13.417597 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:13.417622 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:13.417631 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:13.417637 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:13.417644 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:13.417650 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:13 GMT
	I0718 00:01:13.417658 1870087 round_trippers.go:580]     Audit-Id: 3cf23e27-bd48-4295-b092-ab1b3e68d44a
	I0718 00:01:13.417667 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:13.417818 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:13.914860 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:13.914900 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:13.914911 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:13.914919 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:13.917539 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:13.917564 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:13.917573 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:13 GMT
	I0718 00:01:13.917580 1870087 round_trippers.go:580]     Audit-Id: 7e24bf45-0644-430d-89a7-330214a5f625
	I0718 00:01:13.917586 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:13.917596 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:13.917603 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:13.917614 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:13.917722 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:14.414782 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:14.414805 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:14.414816 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:14.414824 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:14.417563 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:14.417591 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:14.417601 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:14.417610 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:14.417617 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:14.417624 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:14 GMT
	I0718 00:01:14.417631 1870087 round_trippers.go:580]     Audit-Id: 64b69690-dfcf-4273-8f6c-5792727857b3
	I0718 00:01:14.417637 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:14.417763 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:14.914757 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:14.914778 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:14.914788 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:14.914795 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:14.917524 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:14.917545 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:14.917553 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:14.917559 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:14 GMT
	I0718 00:01:14.917566 1870087 round_trippers.go:580]     Audit-Id: aae61f8e-f1f0-4fb6-a826-1b0fc3a88019
	I0718 00:01:14.917572 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:14.917579 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:14.917586 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:14.917699 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:14.918085 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
	I0718 00:01:15.414700 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:15.414724 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:15.414735 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:15.414743 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:15.417617 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:15.417643 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:15.417652 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:15.417659 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:15.417666 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:15.417675 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:15 GMT
	I0718 00:01:15.417681 1870087 round_trippers.go:580]     Audit-Id: 9d657b0b-27e0-4d72-99f0-4fee20832c00
	I0718 00:01:15.417689 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:15.418147 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:15.915210 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:15.915232 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:15.915243 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:15.915250 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:15.917763 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:15.917786 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:15.917795 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:15.917802 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:15.917809 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:15 GMT
	I0718 00:01:15.917815 1870087 round_trippers.go:580]     Audit-Id: aed9c221-b7d0-4124-90c8-859a394d79d4
	I0718 00:01:15.917822 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:15.917837 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:15.918008 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:16.414824 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:16.414849 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:16.414859 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:16.414867 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:16.417617 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:16.417638 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:16.417646 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:16.417653 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:16.417659 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:16.417666 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:16.417672 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:16 GMT
	I0718 00:01:16.417679 1870087 round_trippers.go:580]     Audit-Id: 9305c58d-bf3d-4f49-856d-05c35cbbf789
	I0718 00:01:16.417951 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:16.914723 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:16.914758 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:16.914769 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:16.914777 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:16.917329 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:16.917352 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:16.917361 1870087 round_trippers.go:580]     Audit-Id: 0f7819e0-33fd-45d5-ac0c-d648a0a0e12c
	I0718 00:01:16.917430 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:16.917440 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:16.917447 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:16.917454 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:16.917461 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:16 GMT
	I0718 00:01:16.917665 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:17.414789 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:17.414808 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:17.414817 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:17.414825 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:17.418779 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:01:17.418804 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:17.418813 1870087 round_trippers.go:580]     Audit-Id: d3213dce-6449-4d0c-9d09-a0111005695f
	I0718 00:01:17.418820 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:17.418827 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:17.418834 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:17.418840 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:17.418847 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:17 GMT
	I0718 00:01:17.419007 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:17.419420 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
	I0718 00:01:17.915062 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:17.915083 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:17.915093 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:17.915101 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:17.917503 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:17.917523 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:17.917531 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:17 GMT
	I0718 00:01:17.917538 1870087 round_trippers.go:580]     Audit-Id: 9b5ef999-709f-4525-959a-7e4212846349
	I0718 00:01:17.917545 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:17.917551 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:17.917558 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:17.917571 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:17.917832 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:18.414915 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:18.414937 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:18.414948 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:18.414955 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:18.417527 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:18.417547 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:18.417556 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:18 GMT
	I0718 00:01:18.417563 1870087 round_trippers.go:580]     Audit-Id: 970d05d7-6dee-4fa0-9cb1-b621e79df82f
	I0718 00:01:18.417569 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:18.417576 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:18.417582 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:18.417589 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:18.417784 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:18.915102 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:18.915127 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:18.915138 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:18.915146 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:18.918132 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:18.918154 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:18.918164 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:18.918171 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:18.918179 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:18 GMT
	I0718 00:01:18.918195 1870087 round_trippers.go:580]     Audit-Id: a8afc8ef-e9b9-43e7-804e-fb58d6374cb0
	I0718 00:01:18.918213 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:18.918220 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:18.918325 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:19.415305 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:19.415329 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:19.415341 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:19.415348 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:19.418036 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:19.418069 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:19.418079 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:19.418086 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:19.418093 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:19.418102 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:19 GMT
	I0718 00:01:19.418112 1870087 round_trippers.go:580]     Audit-Id: 81d4547c-ee47-4206-af15-69297e01d209
	I0718 00:01:19.418124 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:19.418389 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:19.915332 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:19.915368 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:19.915381 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:19.915388 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:19.917902 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:19.917931 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:19.917940 1870087 round_trippers.go:580]     Audit-Id: ea04e1e7-e28d-40a6-a54f-f431248355c7
	I0718 00:01:19.917947 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:19.917953 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:19.917960 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:19.917967 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:19.917974 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:19 GMT
	I0718 00:01:19.918108 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:19.918541 1870087 node_ready.go:58] node "multinode-451668" has status "Ready":"False"
	I0718 00:01:20.415342 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:20.415365 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:20.415376 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:20.415384 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:20.418260 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:20.418288 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:20.418299 1870087 round_trippers.go:580]     Audit-Id: bb5df671-bf55-4258-ba26-1f05ee6e6ab0
	I0718 00:01:20.418306 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:20.418313 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:20.418320 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:20.418327 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:20.418334 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:20 GMT
	I0718 00:01:20.418476 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:20.915669 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:20.915690 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:20.915700 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:20.915709 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:20.918625 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:20.918716 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:20.918735 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:20 GMT
	I0718 00:01:20.918746 1870087 round_trippers.go:580]     Audit-Id: 2bc6906a-2865-4eee-8ed7-202828e0c443
	I0718 00:01:20.918767 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:20.918780 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:20.918787 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:20.918794 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:20.918915 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:21.415560 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:21.415580 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:21.415590 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:21.415598 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:21.418187 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:21.418214 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:21.418223 1870087 round_trippers.go:580]     Audit-Id: e1c38b26-117d-4b81-b1ac-b9886e598569
	I0718 00:01:21.418230 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:21.418237 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:21.418244 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:21.418250 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:21.418260 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:21 GMT
	I0718 00:01:21.418404 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"371","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0718 00:01:21.915084 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:21.915108 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:21.915118 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:21.915126 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:21.917785 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:21.917815 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:21.917824 1870087 round_trippers.go:580]     Audit-Id: 4e5fc1b3-22cb-4efc-bff1-cfbae8ac4507
	I0718 00:01:21.917831 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:21.917837 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:21.917844 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:21.917850 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:21.917864 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:21 GMT
	I0718 00:01:21.917983 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:01:21.918371 1870087 node_ready.go:49] node "multinode-451668" has status "Ready":"True"
	I0718 00:01:21.918388 1870087 node_ready.go:38] duration metric: took 31.011364289s waiting for node "multinode-451668" to be "Ready" ...
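
The 31-second figure above comes from a plain readiness poll: roughly every 500ms the client GETs the Node object and inspects its status conditions until Ready reports True, exactly the request/response cycles logged before this point. A minimal client-go sketch of such a loop follows; it assumes a kubeconfig at the default path, and pollNodeReady is an illustrative name rather than minikube's actual node_ready.go helper.

// Hypothetical sketch (not minikube's node_ready.go): poll the Node object
// until its Ready condition reports True, mirroring the ~500ms GET cadence
// visible in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil // the node_ready.go:49 case above: Ready:"True"
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := pollNodeReady(context.Background(), cs, "multinode-451668", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
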
	I0718 00:01:21.918397 1870087 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 00:01:21.918478 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0718 00:01:21.918489 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:21.918499 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:21.918506 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:21.922629 1870087 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0718 00:01:21.922651 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:21.922660 1870087 round_trippers.go:580]     Audit-Id: 1ddfaf85-cdf8-46a0-b286-7a6006708a15
	I0718 00:01:21.922666 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:21.922674 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:21.922681 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:21.922687 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:21.922697 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:21 GMT
	I0718 00:01:21.925421 1870087 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qvgbw","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9d2a4d36-002a-4117-b0ec-2c58b2b7249b","resourceVersion":"443","creationTimestamp":"2023-07-18T00:00:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
	I0718 00:01:21.929551 1870087 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-qvgbw" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:21.929646 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qvgbw
	I0718 00:01:21.929658 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:21.929668 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:21.929679 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:21.938785 1870087 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0718 00:01:21.938809 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:21.938828 1870087 round_trippers.go:580]     Audit-Id: ac9cf5a7-8fdf-48ea-b447-9f8cb9142a9c
	I0718 00:01:21.938835 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:21.938842 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:21.938851 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:21.938863 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:21.938869 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:21 GMT
	I0718 00:01:21.939287 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qvgbw","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9d2a4d36-002a-4117-b0ec-2c58b2b7249b","resourceVersion":"443","creationTimestamp":"2023-07-18T00:00:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0718 00:01:21.939821 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:21.939838 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:21.939848 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:21.939856 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:21.943622 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:01:21.943644 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:21.943652 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:21.943659 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:21.943681 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:21.943694 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:21.943700 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:21 GMT
	I0718 00:01:21.943715 1870087 round_trippers.go:580]     Audit-Id: 1c6f2b7d-cbbd-4ffe-b3c4-5fff44cab6ef
	I0718 00:01:21.944111 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:01:22.445275 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qvgbw
	I0718 00:01:22.445300 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:22.445310 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:22.445317 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:22.448397 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:01:22.448432 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:22.448442 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:22.448450 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:22 GMT
	I0718 00:01:22.448473 1870087 round_trippers.go:580]     Audit-Id: 66029e9c-1d07-45e8-bea9-8f17a67b3d29
	I0718 00:01:22.448485 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:22.448492 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:22.448504 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:22.448879 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qvgbw","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9d2a4d36-002a-4117-b0ec-2c58b2b7249b","resourceVersion":"455","creationTimestamp":"2023-07-18T00:00:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0718 00:01:22.449437 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:22.449456 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:22.449466 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:22.449474 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:22.452039 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:22.452091 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:22.452124 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:22.452146 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:22.452180 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:22.452194 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:22 GMT
	I0718 00:01:22.452202 1870087 round_trippers.go:580]     Audit-Id: 2546907a-9ae8-4872-9efb-2a87e5d54bb1
	I0718 00:01:22.452209 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:22.452380 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:01:22.452834 1870087 pod_ready.go:92] pod "coredns-5d78c9869d-qvgbw" in "kube-system" namespace has status "Ready":"True"
	I0718 00:01:22.452850 1870087 pod_ready.go:81] duration metric: took 523.267216ms waiting for pod "coredns-5d78c9869d-qvgbw" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:22.452881 1870087 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:22.452960 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-451668
	I0718 00:01:22.452971 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:22.452981 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:22.452991 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:22.455666 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:22.455721 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:22.455757 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:22.455782 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:22.455805 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:22 GMT
	I0718 00:01:22.455835 1870087 round_trippers.go:580]     Audit-Id: d346e377-bbf6-4b69-a6ff-ba02e5c9623a
	I0718 00:01:22.455843 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:22.455850 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:22.456002 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-451668","namespace":"kube-system","uid":"ff35a53d-a680-4948-89ae-4b41390d5766","resourceVersion":"429","creationTimestamp":"2023-07-18T00:00:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"7dfc83176ac111a8be324df9a81beceb","kubernetes.io/config.mirror":"7dfc83176ac111a8be324df9a81beceb","kubernetes.io/config.seen":"2023-07-18T00:00:37.124825456Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0718 00:01:22.456537 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:22.456554 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:22.456565 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:22.456572 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:22.459022 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:22.459089 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:22.459114 1870087 round_trippers.go:580]     Audit-Id: 11acd673-3190-456d-9910-89ab4e278788
	I0718 00:01:22.459134 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:22.459148 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:22.459155 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:22.459163 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:22.459169 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:22 GMT
	I0718 00:01:22.459493 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:01:22.459923 1870087 pod_ready.go:92] pod "etcd-multinode-451668" in "kube-system" namespace has status "Ready":"True"
	I0718 00:01:22.459942 1870087 pod_ready.go:81] duration metric: took 7.053459ms waiting for pod "etcd-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:22.459959 1870087 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:22.460020 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-451668
	I0718 00:01:22.460030 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:22.460038 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:22.460046 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:22.462797 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:22.462853 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:22.462892 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:22 GMT
	I0718 00:01:22.462918 1870087 round_trippers.go:580]     Audit-Id: 46d57f95-fe48-4407-9afa-18f03d8be1eb
	I0718 00:01:22.462933 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:22.462940 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:22.462947 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:22.462973 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:22.463137 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-451668","namespace":"kube-system","uid":"67421618-9334-4da3-b70c-4df5028a3e13","resourceVersion":"426","creationTimestamp":"2023-07-18T00:00:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b21d9f1735bd99f21ac6a561db59b8b7","kubernetes.io/config.mirror":"b21d9f1735bd99f21ac6a561db59b8b7","kubernetes.io/config.seen":"2023-07-18T00:00:37.124827679Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0718 00:01:22.463727 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:22.463745 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:22.463754 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:22.463762 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:22.467332 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:01:22.467357 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:22.467366 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:22.467373 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:22 GMT
	I0718 00:01:22.467406 1870087 round_trippers.go:580]     Audit-Id: 16b6543e-a65a-42f2-9103-93f910cf50dd
	I0718 00:01:22.467424 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:22.467432 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:22.467438 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:22.467564 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:01:22.467987 1870087 pod_ready.go:92] pod "kube-apiserver-multinode-451668" in "kube-system" namespace has status "Ready":"True"
	I0718 00:01:22.468004 1870087 pod_ready.go:81] duration metric: took 8.03754ms waiting for pod "kube-apiserver-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:22.468016 1870087 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:22.468081 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-451668
	I0718 00:01:22.468091 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:22.468099 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:22.468106 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:22.470821 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:22.470896 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:22.470913 1870087 round_trippers.go:580]     Audit-Id: 64ebe3d3-ae1d-4837-8c67-da63cc3dc378
	I0718 00:01:22.470921 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:22.470928 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:22.470934 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:22.470941 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:22.470959 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:22 GMT
	I0718 00:01:22.471121 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-451668","namespace":"kube-system","uid":"873eb02d-decd-42d6-a94b-e93f4248f3b8","resourceVersion":"427","creationTimestamp":"2023-07-18T00:00:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e03e0ec870f5198b407028f8bd83bcde","kubernetes.io/config.mirror":"e03e0ec870f5198b407028f8bd83bcde","kubernetes.io/config.seen":"2023-07-18T00:00:37.124818055Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0718 00:01:22.471709 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:22.471726 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:22.471735 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:22.471743 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:22.474170 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:22.474234 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:22.474250 1870087 round_trippers.go:580]     Audit-Id: f23e6178-eb31-476a-bbf3-034592f85c96
	I0718 00:01:22.474258 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:22.474264 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:22.474271 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:22.474278 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:22.474289 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:22 GMT
	I0718 00:01:22.474471 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:01:22.474918 1870087 pod_ready.go:92] pod "kube-controller-manager-multinode-451668" in "kube-system" namespace has status "Ready":"True"
	I0718 00:01:22.474936 1870087 pod_ready.go:81] duration metric: took 6.908599ms waiting for pod "kube-controller-manager-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:22.474949 1870087 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7knpj" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:22.515196 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7knpj
	I0718 00:01:22.515243 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:22.515306 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:22.515336 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:22.518261 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:22.518328 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:22.518354 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:22 GMT
	I0718 00:01:22.518384 1870087 round_trippers.go:580]     Audit-Id: 5203b4a7-49db-4a1a-a556-2d02623ba1ca
	I0718 00:01:22.518431 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:22.518461 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:22.518484 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:22.518507 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:22.518664 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7knpj","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6cebdce-80d9-4b8b-8ea5-415bb18d1f07","resourceVersion":"420","creationTimestamp":"2023-07-18T00:00:49Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6f75b6d3-9814-4f6b-8118-2be5ffd5c4e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f75b6d3-9814-4f6b-8118-2be5ffd5c4e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0718 00:01:22.715610 1870087 request.go:628] Waited for 196.399539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-451668
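As the message itself notes, the "Waited for ... due to client-side throttling" lines come from client-go's client-side rate limiter, not from API priority and fairness on the server: a default rest.Config allows 5 requests per second with a burst of 10, and requests beyond the burst are delayed. A sketch of raising those limits (the numbers are illustrative, not minikube's settings):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset with a higher client-side rate limit than
    // the client-go defaults (QPS=5, Burst=10) that produce the throttling waits
    // seen throughout this log.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        config.QPS = 50
        config.Burst = 100
        return kubernetes.NewForConfig(config)
    }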
	I0718 00:01:22.715708 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:22.715740 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:22.715756 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:22.715764 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:22.718541 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:22.718564 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:22.718574 1870087 round_trippers.go:580]     Audit-Id: ecc0bafe-f92a-41f5-af3a-3fd5179ac67b
	I0718 00:01:22.718589 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:22.718612 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:22.718624 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:22.718631 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:22.718638 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:22 GMT
	I0718 00:01:22.718770 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:01:22.719197 1870087 pod_ready.go:92] pod "kube-proxy-7knpj" in "kube-system" namespace has status "Ready":"True"
	I0718 00:01:22.719214 1870087 pod_ready.go:81] duration metric: took 244.255649ms waiting for pod "kube-proxy-7knpj" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:22.719226 1870087 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:22.915660 1870087 request.go:628] Waited for 196.364691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-451668
	I0718 00:01:22.915730 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-451668
	I0718 00:01:22.915740 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:22.915750 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:22.915758 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:22.918519 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:22.918541 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:22.918551 1870087 round_trippers.go:580]     Audit-Id: 9365a3fb-15bd-436d-bf97-02a61891c28b
	I0718 00:01:22.918561 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:22.918568 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:22.918575 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:22.918582 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:22.918593 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:22 GMT
	I0718 00:01:22.918723 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-451668","namespace":"kube-system","uid":"be313f6d-3c25-4ace-a780-aa89145c91c2","resourceVersion":"428","creationTimestamp":"2023-07-18T00:00:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9065ff40d6e81cfa36e7ba470cd8a37f","kubernetes.io/config.mirror":"9065ff40d6e81cfa36e7ba470cd8a37f","kubernetes.io/config.seen":"2023-07-18T00:00:37.124823954Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0718 00:01:23.115324 1870087 request.go:628] Waited for 196.174335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:23.115424 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:01:23.115476 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:23.115493 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:23.115501 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:23.118097 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:23.118122 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:23.118130 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:23.118137 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:23.118144 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:23.118151 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:23.118158 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:23 GMT
	I0718 00:01:23.118164 1870087 round_trippers.go:580]     Audit-Id: 99d04c14-ea71-4b98-8fae-25642910f4b4
	I0718 00:01:23.118275 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:01:23.118697 1870087 pod_ready.go:92] pod "kube-scheduler-multinode-451668" in "kube-system" namespace has status "Ready":"True"
	I0718 00:01:23.118721 1870087 pod_ready.go:81] duration metric: took 399.485755ms waiting for pod "kube-scheduler-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:01:23.118734 1870087 pod_ready.go:38] duration metric: took 1.200303055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
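Each of the per-pod waits above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) reduces to the same condition test: a pod counts as "Ready" once its PodReady condition is True. A stripped-down sketch of that core check; minikube's pod_ready.go also handles terminal phases and deletion, which this omits:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // podIsReady reports whether a pod's PodReady condition is True, the heart
    // of the `waiting for pod ... to be "Ready"` checks logged above.
    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }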
	I0718 00:01:23.118758 1870087 api_server.go:52] waiting for apiserver process to appear ...
	I0718 00:01:23.118841 1870087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 00:01:23.131088 1870087 command_runner.go:130] > 1252
	I0718 00:01:23.132546 1870087 api_server.go:72] duration metric: took 32.841998511s to wait for apiserver process to appear ...
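The apiserver process check is a single pgrep: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n keeps only the newest match, so the command prints one PID (1252 above) or exits non-zero when nothing matches. Wrapped in Go purely for illustration (sudo dropped for brevity):

    package main

    import (
        "os/exec"
        "strings"
    )

    // apiserverPID mirrors the probe above: pgrep -xnf prints the PID of the
    // newest process whose full command line matches the pattern; a non-zero
    // exit (surfaced as an error here) means no such process exists.
    func apiserverPID() (string, error) {
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }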
	I0718 00:01:23.132575 1870087 api_server.go:88] waiting for apiserver healthz status ...
	I0718 00:01:23.132593 1870087 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0718 00:01:23.142636 1870087 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
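The healthz probe is a plain HTTPS GET expecting a 200 and a body of "ok". By default the apiserver's system:public-info-viewer binding exposes /healthz, /livez, /readyz and /version to unauthenticated callers, so no client certificate is needed; only the cluster's self-signed CA has to be tolerated. A sketch under those assumptions, called here as checkHealthz("https://192.168.58.2:8443"):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    // checkHealthz performs the same probe as api_server.go above: GET /healthz
    // and require HTTP 200 with body "ok". TLS verification is skipped only
    // because the test cluster's CA is self-signed; a real client should trust
    // minikube's CA certificate instead.
    func checkHealthz(base string) error {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
        }
        return nil
    }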
	I0718 00:01:23.142707 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0718 00:01:23.142718 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:23.142728 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:23.142736 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:23.143915 1870087 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 00:01:23.143942 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:23.143950 1870087 round_trippers.go:580]     Audit-Id: 5a21f9e0-78c4-4d38-8ba8-bf68402ec619
	I0718 00:01:23.143958 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:23.143965 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:23.143977 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:23.143988 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:23.143995 1870087 round_trippers.go:580]     Content-Length: 263
	I0718 00:01:23.144001 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:23 GMT
	I0718 00:01:23.144018 1870087 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0718 00:01:23.144098 1870087 api_server.go:141] control plane version: v1.27.3
	I0718 00:01:23.144113 1870087 api_server.go:131] duration metric: took 11.53197ms to wait for apiserver health ...
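The raw GET /version above can also go through client-go's discovery client, which unmarshals the same JSON payload into a version.Info. A brief sketch:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // printServerVersion fetches the same payload as GET /version via the
    // discovery client; GitVersion carries the "v1.27.3" seen above.
    func printServerVersion(cs kubernetes.Interface) error {
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Printf("control plane version: %s (%s, %s)\n",
            info.GitVersion, info.Platform, info.GoVersion)
        return nil
    }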
	I0718 00:01:23.144121 1870087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0718 00:01:23.315541 1870087 request.go:628] Waited for 171.358198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0718 00:01:23.315618 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0718 00:01:23.315629 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:23.315639 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:23.315646 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:23.319302 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:01:23.319328 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:23.319337 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:23 GMT
	I0718 00:01:23.319344 1870087 round_trippers.go:580]     Audit-Id: 04b9b285-7be5-4376-b5b0-116cdb13eb17
	I0718 00:01:23.319350 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:23.319357 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:23.319364 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:23.319371 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:23.320139 1870087 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qvgbw","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9d2a4d36-002a-4117-b0ec-2c58b2b7249b","resourceVersion":"455","creationTimestamp":"2023-07-18T00:00:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0718 00:01:23.322595 1870087 system_pods.go:59] 8 kube-system pods found
	I0718 00:01:23.322625 1870087 system_pods.go:61] "coredns-5d78c9869d-qvgbw" [9d2a4d36-002a-4117-b0ec-2c58b2b7249b] Running
	I0718 00:01:23.322632 1870087 system_pods.go:61] "etcd-multinode-451668" [ff35a53d-a680-4948-89ae-4b41390d5766] Running
	I0718 00:01:23.322637 1870087 system_pods.go:61] "kindnet-jcxjg" [99d30dc5-6047-4fb1-abd0-ddb9c8729969] Running
	I0718 00:01:23.322642 1870087 system_pods.go:61] "kube-apiserver-multinode-451668" [67421618-9334-4da3-b70c-4df5028a3e13] Running
	I0718 00:01:23.322648 1870087 system_pods.go:61] "kube-controller-manager-multinode-451668" [873eb02d-decd-42d6-a94b-e93f4248f3b8] Running
	I0718 00:01:23.322659 1870087 system_pods.go:61] "kube-proxy-7knpj" [e6cebdce-80d9-4b8b-8ea5-415bb18d1f07] Running
	I0718 00:01:23.322664 1870087 system_pods.go:61] "kube-scheduler-multinode-451668" [be313f6d-3c25-4ace-a780-aa89145c91c2] Running
	I0718 00:01:23.322672 1870087 system_pods.go:61] "storage-provisioner" [e1ba839b-7dba-4b50-9c64-851459ea7287] Running
	I0718 00:01:23.322677 1870087 system_pods.go:74] duration metric: took 178.552342ms to wait for pod list to return data ...
	I0718 00:01:23.322689 1870087 default_sa.go:34] waiting for default service account to be created ...
	I0718 00:01:23.516084 1870087 request.go:628] Waited for 193.312152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0718 00:01:23.516147 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0718 00:01:23.516154 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:23.516164 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:23.516171 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:23.519041 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:23.519066 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:23.519075 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:23 GMT
	I0718 00:01:23.519082 1870087 round_trippers.go:580]     Audit-Id: 1fcb47bb-1660-4902-a331-23c05256e388
	I0718 00:01:23.519089 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:23.519098 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:23.519105 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:23.519113 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:23.519123 1870087 round_trippers.go:580]     Content-Length: 261
	I0718 00:01:23.519151 1870087 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4b7cb092-ddad-4d0f-b743-13b4791b5d11","resourceVersion":"359","creationTimestamp":"2023-07-18T00:00:49Z"}}]}
	I0718 00:01:23.519356 1870087 default_sa.go:45] found service account: "default"
	I0718 00:01:23.519376 1870087 default_sa.go:55] duration metric: took 196.680922ms for default service account to be created ...
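The "default" ServiceAccount is created asynchronously by the controller manager's service-account controller, which is why a fresh cluster has to wait for it before workloads can be admitted. A sketch of the existence check (hypothetical helper; the log above lists the namespace's service accounts rather than getting a single object):

    package main

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultSAExists reports whether the service-account controller has created
    // the "default" ServiceAccount in the default namespace yet.
    func defaultSAExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, nil
        }
        return err == nil, err
    }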
	I0718 00:01:23.519386 1870087 system_pods.go:116] waiting for k8s-apps to be running ...
	I0718 00:01:23.715654 1870087 request.go:628] Waited for 196.161625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0718 00:01:23.715717 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0718 00:01:23.715725 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:23.715735 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:23.715747 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:23.719569 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:01:23.719595 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:23.719606 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:23.719614 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:23 GMT
	I0718 00:01:23.719621 1870087 round_trippers.go:580]     Audit-Id: 030f1b01-36d0-4fd9-ac16-8ac19a644837
	I0718 00:01:23.719629 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:23.719636 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:23.719642 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:23.719980 1870087 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qvgbw","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9d2a4d36-002a-4117-b0ec-2c58b2b7249b","resourceVersion":"455","creationTimestamp":"2023-07-18T00:00:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0718 00:01:23.722395 1870087 system_pods.go:86] 8 kube-system pods found
	I0718 00:01:23.722445 1870087 system_pods.go:89] "coredns-5d78c9869d-qvgbw" [9d2a4d36-002a-4117-b0ec-2c58b2b7249b] Running
	I0718 00:01:23.722453 1870087 system_pods.go:89] "etcd-multinode-451668" [ff35a53d-a680-4948-89ae-4b41390d5766] Running
	I0718 00:01:23.722458 1870087 system_pods.go:89] "kindnet-jcxjg" [99d30dc5-6047-4fb1-abd0-ddb9c8729969] Running
	I0718 00:01:23.722463 1870087 system_pods.go:89] "kube-apiserver-multinode-451668" [67421618-9334-4da3-b70c-4df5028a3e13] Running
	I0718 00:01:23.722474 1870087 system_pods.go:89] "kube-controller-manager-multinode-451668" [873eb02d-decd-42d6-a94b-e93f4248f3b8] Running
	I0718 00:01:23.722483 1870087 system_pods.go:89] "kube-proxy-7knpj" [e6cebdce-80d9-4b8b-8ea5-415bb18d1f07] Running
	I0718 00:01:23.722488 1870087 system_pods.go:89] "kube-scheduler-multinode-451668" [be313f6d-3c25-4ace-a780-aa89145c91c2] Running
	I0718 00:01:23.722500 1870087 system_pods.go:89] "storage-provisioner" [e1ba839b-7dba-4b50-9c64-851459ea7287] Running
	I0718 00:01:23.722506 1870087 system_pods.go:126] duration metric: took 203.115943ms to wait for k8s-apps to be running ...
	I0718 00:01:23.722517 1870087 system_svc.go:44] waiting for kubelet service to be running ....
	I0718 00:01:23.722573 1870087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 00:01:23.736845 1870087 system_svc.go:56] duration metric: took 14.320078ms WaitForService to wait for kubelet.
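The kubelet check leans on systemd exit codes: `systemctl is-active --quiet <unit>` prints nothing and exits 0 only when the unit is active, so the ssh runner only has to inspect the exit status. Illustrated in Go (sudo dropped):

    package main

    import (
        "os/exec"
    )

    // kubeletActive reports whether the kubelet systemd unit is active;
    // `systemctl is-active --quiet` exits 0 only for an active unit.
    func kubeletActive() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }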
	I0718 00:01:23.736871 1870087 kubeadm.go:581] duration metric: took 33.446326632s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0718 00:01:23.736915 1870087 node_conditions.go:102] verifying NodePressure condition ...
	I0718 00:01:23.915253 1870087 request.go:628] Waited for 178.259996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0718 00:01:23.915323 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0718 00:01:23.915339 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:23.915349 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:23.915357 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:23.917997 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:23.918074 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:23.918096 1870087 round_trippers.go:580]     Audit-Id: 66812be9-bdae-4f9b-841f-d7abf574288e
	I0718 00:01:23.918118 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:23.918152 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:23.918204 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:23.918218 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:23.918227 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:23 GMT
	I0718 00:01:23.918337 1870087 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0718 00:01:23.918818 1870087 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0718 00:01:23.918844 1870087 node_conditions.go:123] node cpu capacity is 2
	I0718 00:01:23.918857 1870087 node_conditions.go:105] duration metric: took 181.932024ms to run NodePressure ...
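The two capacity figures above are read straight off the fetched node object rather than measured: Status.Capacity is a ResourceList (a map of resource name to quantity). A sketch of the extraction:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // nodeCapacity extracts the figures logged above from a fetched node:
    // ephemeral storage ("203034800Ki") and CPU count ("2").
    func nodeCapacity(node *corev1.Node) (storage, cpu string) {
        s := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        c := node.Status.Capacity[corev1.ResourceCPU]
        return s.String(), c.String()
    }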
	I0718 00:01:23.918873 1870087 start.go:228] waiting for startup goroutines ...
	I0718 00:01:23.918884 1870087 start.go:233] waiting for cluster config update ...
	I0718 00:01:23.918894 1870087 start.go:242] writing updated cluster config ...
	I0718 00:01:23.921416 1870087 out.go:177] 
	I0718 00:01:23.923595 1870087 config.go:182] Loaded profile config "multinode-451668": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0718 00:01:23.923705 1870087 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/config.json ...
	I0718 00:01:23.925853 1870087 out.go:177] * Starting worker node multinode-451668-m02 in cluster multinode-451668
	I0718 00:01:23.927625 1870087 cache.go:122] Beginning downloading kic base image for docker with crio
	I0718 00:01:23.929516 1870087 out.go:177] * Pulling base image ...
	I0718 00:01:23.932269 1870087 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0718 00:01:23.932302 1870087 cache.go:57] Caching tarball of preloaded images
	I0718 00:01:23.932378 1870087 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0718 00:01:23.932413 1870087 preload.go:174] Found /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0718 00:01:23.932431 1870087 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0718 00:01:23.932538 1870087 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/config.json ...
	I0718 00:01:23.950672 1870087 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0718 00:01:23.950696 1870087 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0718 00:01:23.950718 1870087 cache.go:195] Successfully downloaded all kic artifacts
	I0718 00:01:23.950748 1870087 start.go:365] acquiring machines lock for multinode-451668-m02: {Name:mke026fff85f37200fe67a681a179e13e351e865 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:01:23.950879 1870087 start.go:369] acquired machines lock for "multinode-451668-m02" in 107.953µs
	I0718 00:01:23.950912 1870087 start.go:93] Provisioning new machine with config: &{Name:multinode-451668 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-451668 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0718 00:01:23.951011 1870087 start.go:125] createHost starting for "m02" (driver="docker")
	I0718 00:01:23.953565 1870087 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 00:01:23.953683 1870087 start.go:159] libmachine.API.Create for "multinode-451668" (driver="docker")
	I0718 00:01:23.953721 1870087 client.go:168] LocalClient.Create starting
	I0718 00:01:23.953812 1870087 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem
	I0718 00:01:23.953851 1870087 main.go:141] libmachine: Decoding PEM data...
	I0718 00:01:23.953873 1870087 main.go:141] libmachine: Parsing certificate...
	I0718 00:01:23.953931 1870087 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem
	I0718 00:01:23.953957 1870087 main.go:141] libmachine: Decoding PEM data...
	I0718 00:01:23.953972 1870087 main.go:141] libmachine: Parsing certificate...
	I0718 00:01:23.954217 1870087 cli_runner.go:164] Run: docker network inspect multinode-451668 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 00:01:23.971657 1870087 network_create.go:76] Found existing network {name:multinode-451668 subnet:0x4000d41890 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0718 00:01:23.971713 1870087 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-451668-m02" container
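
	[editor's note] The static IP for the second node follows the network's gateway (.1) and control-plane (.2) assignments. Below is a hypothetical Go helper, nextNodeIP, illustrating one way such an address can be derived from the subnet and node index; simple last-octet arithmetic is an assumption, suggested by the .2/.3 pattern in this log.

package main

import (
	"fmt"
	"net"
)

// nextNodeIP returns the address for the N-th cluster node: subnet base plus
// (nodeIndex + 1) in the last octet, so the gateway keeps .1 and the control
// plane keeps .2. No overflow handling; a sketch only.
func nextNodeIP(subnet string, nodeIndex int) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(subnet)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("only IPv4 subnets supported: %s", subnet)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(nodeIndex + 1) // node 1 -> .2, node 2 -> .3, ...
	return out, nil
}

func main() {
	ip, _ := nextNodeIP("192.168.58.0/24", 2) // second node (m02)
	fmt.Println(ip)                           // 192.168.58.3
}
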
	I0718 00:01:23.971795 1870087 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 00:01:23.993436 1870087 cli_runner.go:164] Run: docker volume create multinode-451668-m02 --label name.minikube.sigs.k8s.io=multinode-451668-m02 --label created_by.minikube.sigs.k8s.io=true
	I0718 00:01:24.016097 1870087 oci.go:103] Successfully created a docker volume multinode-451668-m02
	I0718 00:01:24.016203 1870087 cli_runner.go:164] Run: docker run --rm --name multinode-451668-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-451668-m02 --entrypoint /usr/bin/test -v multinode-451668-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0718 00:01:24.610324 1870087 oci.go:107] Successfully prepared a docker volume multinode-451668-m02
	I0718 00:01:24.610368 1870087 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0718 00:01:24.610387 1870087 kic.go:190] Starting extracting preloaded images to volume ...
	I0718 00:01:24.610510 1870087 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-451668-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 00:01:28.779589 1870087 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-451668-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.169033555s)
	I0718 00:01:28.779624 1870087 kic.go:199] duration metric: took 4.169233 seconds to extract preloaded images to volume
	W0718 00:01:28.779761 1870087 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0718 00:01:28.779886 1870087 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0718 00:01:28.851327 1870087 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-451668-m02 --name multinode-451668-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-451668-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-451668-m02 --network multinode-451668 --ip 192.168.58.3 --volume multinode-451668-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0718 00:01:29.205586 1870087 cli_runner.go:164] Run: docker container inspect multinode-451668-m02 --format={{.State.Running}}
	I0718 00:01:29.238742 1870087 cli_runner.go:164] Run: docker container inspect multinode-451668-m02 --format={{.State.Status}}
	I0718 00:01:29.260015 1870087 cli_runner.go:164] Run: docker exec multinode-451668-m02 stat /var/lib/dpkg/alternatives/iptables
	I0718 00:01:29.347336 1870087 oci.go:144] the created container "multinode-451668-m02" has a running status.
	I0718 00:01:29.347367 1870087 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668-m02/id_rsa...
	I0718 00:01:30.600380 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0718 00:01:30.600431 1870087 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0718 00:01:30.622789 1870087 cli_runner.go:164] Run: docker container inspect multinode-451668-m02 --format={{.State.Status}}
	I0718 00:01:30.641902 1870087 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0718 00:01:30.641927 1870087 kic_runner.go:114] Args: [docker exec --privileged multinode-451668-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0718 00:01:30.705144 1870087 cli_runner.go:164] Run: docker container inspect multinode-451668-m02 --format={{.State.Status}}
	I0718 00:01:30.725000 1870087 machine.go:88] provisioning docker machine ...
	I0718 00:01:30.725029 1870087 ubuntu.go:169] provisioning hostname "multinode-451668-m02"
	I0718 00:01:30.725096 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668-m02
	I0718 00:01:30.743771 1870087 main.go:141] libmachine: Using SSH client type: native
	I0718 00:01:30.744256 1870087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34743 <nil> <nil>}
	I0718 00:01:30.744275 1870087 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-451668-m02 && echo "multinode-451668-m02" | sudo tee /etc/hostname
	I0718 00:01:30.889323 1870087 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-451668-m02
	
	I0718 00:01:30.889405 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668-m02
	I0718 00:01:30.908813 1870087 main.go:141] libmachine: Using SSH client type: native
	I0718 00:01:30.909251 1870087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34743 <nil> <nil>}
	I0718 00:01:30.909269 1870087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-451668-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-451668-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-451668-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 00:01:31.039910 1870087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
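
	[editor's note] The SSH script above keeps /etc/hosts consistent with the new hostname. Here is the same logic sketched in Go; ensureHostsEntry is a hypothetical name, and the real work happens in the shell snippet shown in the log.

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry leaves the file alone if the hostname is already present,
// otherwise rewrites an existing 127.0.1.1 line or appends a new one.
func ensureHostsEntry(hosts, hostname string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+hostname) || strings.HasSuffix(l, "\t"+hostname) {
			return hosts // already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "127.0.1.1 " + hostname + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "multinode-451668-m02"))
}
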
	I0718 00:01:31.039936 1870087 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-1800837/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-1800837/.minikube}
	I0718 00:01:31.039955 1870087 ubuntu.go:177] setting up certificates
	I0718 00:01:31.039965 1870087 provision.go:83] configureAuth start
	I0718 00:01:31.040030 1870087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-451668-m02
	I0718 00:01:31.060395 1870087 provision.go:138] copyHostCerts
	I0718 00:01:31.060439 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem
	I0718 00:01:31.060478 1870087 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem, removing ...
	I0718 00:01:31.060491 1870087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem
	I0718 00:01:31.060570 1870087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem (1082 bytes)
	I0718 00:01:31.060659 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem
	I0718 00:01:31.060681 1870087 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem, removing ...
	I0718 00:01:31.060686 1870087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem
	I0718 00:01:31.060714 1870087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem (1123 bytes)
	I0718 00:01:31.060761 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem
	I0718 00:01:31.060783 1870087 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem, removing ...
	I0718 00:01:31.060795 1870087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem
	I0718 00:01:31.060824 1870087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem (1675 bytes)
	I0718 00:01:31.060894 1870087 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem org=jenkins.multinode-451668-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-451668-m02]
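
	[editor's note] The server cert is signed by the shared minikube CA with the node IP and hostnames as SANs, so TLS verification succeeds whether the daemon is reached as 192.168.58.3, 127.0.0.1, or by name. A minimal crypto/x509 sketch of that signing step follows; the self-generated CA and 24h lifetime are placeholders (minikube loads ca.pem/ca-key.pem and uses its own expiry), and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Placeholder CA generated on the spot.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-451668-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-451668-m02"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
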
	I0718 00:01:31.774984 1870087 provision.go:172] copyRemoteCerts
	I0718 00:01:31.775054 1870087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 00:01:31.775102 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668-m02
	I0718 00:01:31.793913 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34743 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668-m02/id_rsa Username:docker}
	I0718 00:01:31.889274 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 00:01:31.889336 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 00:01:31.918097 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 00:01:31.918155 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0718 00:01:31.947026 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 00:01:31.947087 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 00:01:31.975724 1870087 provision.go:86] duration metric: configureAuth took 935.745768ms
	I0718 00:01:31.975749 1870087 ubuntu.go:193] setting minikube options for container-runtime
	I0718 00:01:31.975947 1870087 config.go:182] Loaded profile config "multinode-451668": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0718 00:01:31.976061 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668-m02
	I0718 00:01:31.994168 1870087 main.go:141] libmachine: Using SSH client type: native
	I0718 00:01:31.994635 1870087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34743 <nil> <nil>}
	I0718 00:01:31.994657 1870087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0718 00:01:32.248650 1870087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0718 00:01:32.248670 1870087 machine.go:91] provisioned docker machine in 1.52365185s
	I0718 00:01:32.248679 1870087 client.go:171] LocalClient.Create took 8.294947723s
	I0718 00:01:32.248691 1870087 start.go:167] duration metric: libmachine.API.Create for "multinode-451668" took 8.295008958s
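
	[editor's note] The stray %!s(MISSING) in the provisioning command above, and the "10%!" / "(MISSING)" split that appears later in this log, are not corruption: they are Go's fmt package flagging a format string whose verb has no matching operand, which happens when a shell command containing a literal % is echoed through Printf. A two-line reproduction:

package main

import "fmt"

func main() {
	fmt.Printf("printf %s \n") // prints: printf %!s(MISSING)
	fmt.Printf("10%\n")        // prints: 10%!
	                           //         (MISSING)
}
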
	I0718 00:01:32.248699 1870087 start.go:300] post-start starting for "multinode-451668-m02" (driver="docker")
	I0718 00:01:32.248711 1870087 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 00:01:32.248776 1870087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 00:01:32.248827 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668-m02
	I0718 00:01:32.268360 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34743 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668-m02/id_rsa Username:docker}
	I0718 00:01:32.365878 1870087 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 00:01:32.370060 1870087 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0718 00:01:32.370081 1870087 command_runner.go:130] > NAME="Ubuntu"
	I0718 00:01:32.370089 1870087 command_runner.go:130] > VERSION_ID="22.04"
	I0718 00:01:32.370099 1870087 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0718 00:01:32.370105 1870087 command_runner.go:130] > VERSION_CODENAME=jammy
	I0718 00:01:32.370110 1870087 command_runner.go:130] > ID=ubuntu
	I0718 00:01:32.370115 1870087 command_runner.go:130] > ID_LIKE=debian
	I0718 00:01:32.370122 1870087 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0718 00:01:32.370127 1870087 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0718 00:01:32.370135 1870087 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0718 00:01:32.370146 1870087 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0718 00:01:32.370151 1870087 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0718 00:01:32.370208 1870087 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0718 00:01:32.370238 1870087 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0718 00:01:32.370257 1870087 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0718 00:01:32.370264 1870087 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0718 00:01:32.370277 1870087 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/addons for local assets ...
	I0718 00:01:32.370340 1870087 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/files for local assets ...
	I0718 00:01:32.370468 1870087 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> 18062262.pem in /etc/ssl/certs
	I0718 00:01:32.370479 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> /etc/ssl/certs/18062262.pem
	I0718 00:01:32.370586 1870087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 00:01:32.382093 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem --> /etc/ssl/certs/18062262.pem (1708 bytes)
	I0718 00:01:32.412195 1870087 start.go:303] post-start completed in 163.478663ms
	I0718 00:01:32.412598 1870087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-451668-m02
	I0718 00:01:32.431500 1870087 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/config.json ...
	I0718 00:01:32.431799 1870087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 00:01:32.431857 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668-m02
	I0718 00:01:32.450690 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34743 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668-m02/id_rsa Username:docker}
	I0718 00:01:32.544965 1870087 command_runner.go:130] > 10%!
	(MISSING)I0718 00:01:32.545041 1870087 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 00:01:32.550920 1870087 command_runner.go:130] > 175G
	I0718 00:01:32.550952 1870087 start.go:128] duration metric: createHost completed in 8.599932264s
	I0718 00:01:32.550962 1870087 start.go:83] releasing machines lock for "multinode-451668-m02", held for 8.600070519s
	I0718 00:01:32.551039 1870087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-451668-m02
	I0718 00:01:32.571770 1870087 out.go:177] * Found network options:
	I0718 00:01:32.573520 1870087 out.go:177]   - NO_PROXY=192.168.58.2
	W0718 00:01:32.575280 1870087 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 00:01:32.575320 1870087 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 00:01:32.575387 1870087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0718 00:01:32.575433 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668-m02
	I0718 00:01:32.575714 1870087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 00:01:32.575775 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668-m02
	I0718 00:01:32.600695 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34743 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668-m02/id_rsa Username:docker}
	I0718 00:01:32.612019 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34743 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668-m02/id_rsa Username:docker}
	I0718 00:01:32.834784 1870087 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0718 00:01:32.876461 1870087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 00:01:32.882194 1870087 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0718 00:01:32.882263 1870087 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0718 00:01:32.882277 1870087 command_runner.go:130] > Device: b3h/179d	Inode: 2078873     Links: 1
	I0718 00:01:32.882286 1870087 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0718 00:01:32.882293 1870087 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0718 00:01:32.882299 1870087 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0718 00:01:32.882305 1870087 command_runner.go:130] > Change: 2023-07-17 23:37:43.440241036 +0000
	I0718 00:01:32.882311 1870087 command_runner.go:130] >  Birth: 2023-07-17 23:37:43.440241036 +0000
	I0718 00:01:32.882390 1870087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 00:01:32.905014 1870087 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0718 00:01:32.905116 1870087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 00:01:32.947246 1870087 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0718 00:01:32.947277 1870087 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
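
	[editor's note] Minikube parks any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, leaving only loopback until the cluster's own CNI is installed. A Go sketch of that rename pass; disableCNIConfigs is a hypothetical helper mirroring the find/mv one-liner above.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames every bridge or podman CNI config under dir,
// skipping files already carrying the .mk_disabled suffix.
func disableCNIConfigs(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	fmt.Println("disabled:", disabled)
}
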
	I0718 00:01:32.947285 1870087 start.go:466] detecting cgroup driver to use...
	I0718 00:01:32.947350 1870087 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0718 00:01:32.947433 1870087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 00:01:32.967737 1870087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 00:01:32.981389 1870087 docker.go:196] disabling cri-docker service (if available) ...
	I0718 00:01:32.981514 1870087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0718 00:01:32.998828 1870087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0718 00:01:33.017242 1870087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0718 00:01:33.131680 1870087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0718 00:01:33.244189 1870087 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0718 00:01:33.244253 1870087 docker.go:212] disabling docker service ...
	I0718 00:01:33.244323 1870087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0718 00:01:33.268725 1870087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0718 00:01:33.283632 1870087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0718 00:01:33.385378 1870087 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0718 00:01:33.385486 1870087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0718 00:01:33.488281 1870087 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0718 00:01:33.488419 1870087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0718 00:01:33.508594 1870087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 00:01:33.527669 1870087 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
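
	[editor's note] crictl is pointed at the CRI-O socket via /etc/crictl.yaml. The equivalent write, sketched in Go (hypothetical helper; the log does it with printf | sudo tee):

package main

import "os"

// writeCrictlConfig points crictl at CRI-O's socket so "crictl version"
// and friends talk to the right runtime.
func writeCrictlConfig() error {
	data := []byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n")
	return os.WriteFile("/etc/crictl.yaml", data, 0o644)
}

func main() {
	if err := writeCrictlConfig(); err != nil {
		panic(err)
	}
}
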
	I0718 00:01:33.529470 1870087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0718 00:01:33.529546 1870087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0718 00:01:33.544322 1870087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0718 00:01:33.544395 1870087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0718 00:01:33.556783 1870087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0718 00:01:33.568817 1870087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
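
	[editor's note] These sed one-liners rewrite whole `key = value` lines in 02-crio.conf, which also un-comments keys that were previously commented out. The same edit sketched in Go; setCrioKey is a hypothetical name.

package main

import (
	"fmt"
	"regexp"
)

// setCrioKey replaces any line containing `key = ...` (commented or not)
// with a fresh `key = "value"` assignment, matching the sed expressions above.
func setCrioKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	conf := []byte("# cgroup_manager = \"systemd\"\npause_image = \"old\"\n")
	conf = setCrioKey(conf, "cgroup_manager", "cgroupfs")
	conf = setCrioKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	fmt.Print(string(conf))
}
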
	I0718 00:01:33.581329 1870087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 00:01:33.592712 1870087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 00:01:33.602193 1870087 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0718 00:01:33.603490 1870087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 00:01:33.614343 1870087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 00:01:33.704126 1870087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0718 00:01:33.831041 1870087 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0718 00:01:33.831139 1870087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0718 00:01:33.836072 1870087 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0718 00:01:33.836095 1870087 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0718 00:01:33.836103 1870087 command_runner.go:130] > Device: bch/188d	Inode: 186         Links: 1
	I0718 00:01:33.836111 1870087 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0718 00:01:33.836140 1870087 command_runner.go:130] > Access: 2023-07-18 00:01:33.817891700 +0000
	I0718 00:01:33.836157 1870087 command_runner.go:130] > Modify: 2023-07-18 00:01:33.817891700 +0000
	I0718 00:01:33.836166 1870087 command_runner.go:130] > Change: 2023-07-18 00:01:33.817891700 +0000
	I0718 00:01:33.836180 1870087 command_runner.go:130] >  Birth: -
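
	[editor's note] A Go sketch of the "Will wait 60s for socket path" step above: poll until the path exists and is a unix socket, or give up. waitForSocket is a hypothetical helper; minikube's actual retry cadence may differ.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a unix socket at path until timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is up")
}
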
	I0718 00:01:33.836218 1870087 start.go:534] Will wait 60s for crictl version
	I0718 00:01:33.836299 1870087 ssh_runner.go:195] Run: which crictl
	I0718 00:01:33.840859 1870087 command_runner.go:130] > /usr/bin/crictl
	I0718 00:01:33.841203 1870087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 00:01:33.882123 1870087 command_runner.go:130] > Version:  0.1.0
	I0718 00:01:33.882190 1870087 command_runner.go:130] > RuntimeName:  cri-o
	I0718 00:01:33.882212 1870087 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0718 00:01:33.882233 1870087 command_runner.go:130] > RuntimeApiVersion:  v1
	I0718 00:01:33.884980 1870087 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0718 00:01:33.885105 1870087 ssh_runner.go:195] Run: crio --version
	I0718 00:01:33.926012 1870087 command_runner.go:130] > crio version 1.24.6
	I0718 00:01:33.926068 1870087 command_runner.go:130] > Version:          1.24.6
	I0718 00:01:33.926094 1870087 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0718 00:01:33.926115 1870087 command_runner.go:130] > GitTreeState:     clean
	I0718 00:01:33.926142 1870087 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0718 00:01:33.926162 1870087 command_runner.go:130] > GoVersion:        go1.18.2
	I0718 00:01:33.926181 1870087 command_runner.go:130] > Compiler:         gc
	I0718 00:01:33.926201 1870087 command_runner.go:130] > Platform:         linux/arm64
	I0718 00:01:33.926222 1870087 command_runner.go:130] > Linkmode:         dynamic
	I0718 00:01:33.926246 1870087 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0718 00:01:33.926267 1870087 command_runner.go:130] > SeccompEnabled:   true
	I0718 00:01:33.926285 1870087 command_runner.go:130] > AppArmorEnabled:  false
	I0718 00:01:33.928468 1870087 ssh_runner.go:195] Run: crio --version
	I0718 00:01:33.976202 1870087 command_runner.go:130] > crio version 1.24.6
	I0718 00:01:33.976226 1870087 command_runner.go:130] > Version:          1.24.6
	I0718 00:01:33.976237 1870087 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0718 00:01:33.976242 1870087 command_runner.go:130] > GitTreeState:     clean
	I0718 00:01:33.976250 1870087 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0718 00:01:33.976264 1870087 command_runner.go:130] > GoVersion:        go1.18.2
	I0718 00:01:33.976272 1870087 command_runner.go:130] > Compiler:         gc
	I0718 00:01:33.976278 1870087 command_runner.go:130] > Platform:         linux/arm64
	I0718 00:01:33.976294 1870087 command_runner.go:130] > Linkmode:         dynamic
	I0718 00:01:33.976303 1870087 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0718 00:01:33.976311 1870087 command_runner.go:130] > SeccompEnabled:   true
	I0718 00:01:33.976316 1870087 command_runner.go:130] > AppArmorEnabled:  false
	I0718 00:01:33.981665 1870087 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0718 00:01:33.983752 1870087 out.go:177]   - env NO_PROXY=192.168.58.2
	I0718 00:01:33.985501 1870087 cli_runner.go:164] Run: docker network inspect multinode-451668 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 00:01:34.004784 1870087 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0718 00:01:34.009647 1870087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 00:01:34.024054 1870087 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668 for IP: 192.168.58.3
	I0718 00:01:34.024084 1870087 certs.go:190] acquiring lock for shared ca certs: {Name:mkb76b85951e1a7e4a78939a9bc1392aa19273b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 00:01:34.024225 1870087 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key
	I0718 00:01:34.024268 1870087 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key
	I0718 00:01:34.024279 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 00:01:34.024293 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 00:01:34.024303 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 00:01:34.024316 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 00:01:34.024370 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226.pem (1338 bytes)
	W0718 00:01:34.024399 1870087 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226_empty.pem, impossibly tiny 0 bytes
	I0718 00:01:34.024408 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 00:01:34.024434 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem (1082 bytes)
	I0718 00:01:34.024457 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem (1123 bytes)
	I0718 00:01:34.024481 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem (1675 bytes)
	I0718 00:01:34.024528 1870087 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem (1708 bytes)
	I0718 00:01:34.024554 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> /usr/share/ca-certificates/18062262.pem
	I0718 00:01:34.024566 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 00:01:34.024596 1870087 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226.pem -> /usr/share/ca-certificates/1806226.pem
	I0718 00:01:34.024922 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 00:01:34.056254 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0718 00:01:34.084768 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 00:01:34.113952 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0718 00:01:34.143142 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem --> /usr/share/ca-certificates/18062262.pem (1708 bytes)
	I0718 00:01:34.172656 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 00:01:34.201700 1870087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/1806226.pem --> /usr/share/ca-certificates/1806226.pem (1338 bytes)
	I0718 00:01:34.233977 1870087 ssh_runner.go:195] Run: openssl version
	I0718 00:01:34.240652 1870087 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0718 00:01:34.241213 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1806226.pem && ln -fs /usr/share/ca-certificates/1806226.pem /etc/ssl/certs/1806226.pem"
	I0718 00:01:34.253226 1870087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1806226.pem
	I0718 00:01:34.258839 1870087 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 23:44 /usr/share/ca-certificates/1806226.pem
	I0718 00:01:34.258873 1870087 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 23:44 /usr/share/ca-certificates/1806226.pem
	I0718 00:01:34.258941 1870087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1806226.pem
	I0718 00:01:34.267369 1870087 command_runner.go:130] > 51391683
	I0718 00:01:34.267822 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1806226.pem /etc/ssl/certs/51391683.0"
	I0718 00:01:34.280040 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18062262.pem && ln -fs /usr/share/ca-certificates/18062262.pem /etc/ssl/certs/18062262.pem"
	I0718 00:01:34.291881 1870087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18062262.pem
	I0718 00:01:34.296382 1870087 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 23:44 /usr/share/ca-certificates/18062262.pem
	I0718 00:01:34.296633 1870087 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 23:44 /usr/share/ca-certificates/18062262.pem
	I0718 00:01:34.296716 1870087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18062262.pem
	I0718 00:01:34.305220 1870087 command_runner.go:130] > 3ec20f2e
	I0718 00:01:34.305303 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18062262.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 00:01:34.317778 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 00:01:34.329918 1870087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 00:01:34.334743 1870087 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0718 00:01:34.334984 1870087 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0718 00:01:34.335052 1870087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 00:01:34.344473 1870087 command_runner.go:130] > b5213941
	I0718 00:01:34.344916 1870087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
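
	[editor's note] The hash printed by `openssl x509 -hash` (b5213941 for minikubeCA above) names the <hash>.0 symlink that OpenSSL-based clients use to look up a CA in /etc/ssl/certs. The same dance sketched in Go; linkCACert is a hypothetical helper.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert asks openssl for the certificate's subject hash and links
// <hash>.0 in certsDir to the cert file.
func linkCACert(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ignore error: link may not exist yet
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		panic(err)
	}
	fmt.Println("linked:", link)
}
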
	I0718 00:01:34.357156 1870087 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0718 00:01:34.361785 1870087 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0718 00:01:34.361822 1870087 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0718 00:01:34.361919 1870087 ssh_runner.go:195] Run: crio config
	I0718 00:01:34.416528 1870087 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0718 00:01:34.416553 1870087 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0718 00:01:34.416562 1870087 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0718 00:01:34.416566 1870087 command_runner.go:130] > #
	I0718 00:01:34.416577 1870087 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0718 00:01:34.416585 1870087 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0718 00:01:34.416596 1870087 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0718 00:01:34.416610 1870087 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0718 00:01:34.416615 1870087 command_runner.go:130] > # reload'.
	I0718 00:01:34.416625 1870087 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0718 00:01:34.416633 1870087 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0718 00:01:34.416644 1870087 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0718 00:01:34.416652 1870087 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0718 00:01:34.416656 1870087 command_runner.go:130] > [crio]
	I0718 00:01:34.416666 1870087 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0718 00:01:34.416673 1870087 command_runner.go:130] > # containers images, in this directory.
	I0718 00:01:34.416683 1870087 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0718 00:01:34.416695 1870087 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0718 00:01:34.416702 1870087 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0718 00:01:34.416713 1870087 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0718 00:01:34.416720 1870087 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0718 00:01:34.416726 1870087 command_runner.go:130] > # storage_driver = "vfs"
	I0718 00:01:34.416733 1870087 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0718 00:01:34.416743 1870087 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0718 00:01:34.416749 1870087 command_runner.go:130] > # storage_option = [
	I0718 00:01:34.416753 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.416761 1870087 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0718 00:01:34.416771 1870087 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0718 00:01:34.416777 1870087 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0718 00:01:34.416786 1870087 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0718 00:01:34.416796 1870087 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0718 00:01:34.416812 1870087 command_runner.go:130] > # always happen on a node reboot
	I0718 00:01:34.416820 1870087 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0718 00:01:34.416828 1870087 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0718 00:01:34.416838 1870087 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0718 00:01:34.416851 1870087 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0718 00:01:34.416858 1870087 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0718 00:01:34.416870 1870087 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0718 00:01:34.416883 1870087 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0718 00:01:34.416891 1870087 command_runner.go:130] > # internal_wipe = true
	I0718 00:01:34.416897 1870087 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0718 00:01:34.416905 1870087 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0718 00:01:34.416912 1870087 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0718 00:01:34.416922 1870087 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0718 00:01:34.416929 1870087 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0718 00:01:34.416936 1870087 command_runner.go:130] > [crio.api]
	I0718 00:01:34.416945 1870087 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0718 00:01:34.416953 1870087 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0718 00:01:34.416962 1870087 command_runner.go:130] > # IP address on which the stream server will listen.
	I0718 00:01:34.416967 1870087 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0718 00:01:34.416977 1870087 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0718 00:01:34.416984 1870087 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0718 00:01:34.416991 1870087 command_runner.go:130] > # stream_port = "0"
	I0718 00:01:34.416998 1870087 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0718 00:01:34.417003 1870087 command_runner.go:130] > # stream_enable_tls = false
	I0718 00:01:34.417011 1870087 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0718 00:01:34.417019 1870087 command_runner.go:130] > # stream_idle_timeout = ""
	I0718 00:01:34.417026 1870087 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0718 00:01:34.417034 1870087 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0718 00:01:34.417039 1870087 command_runner.go:130] > # minutes.
	I0718 00:01:34.417047 1870087 command_runner.go:130] > # stream_tls_cert = ""
	I0718 00:01:34.417055 1870087 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0718 00:01:34.417065 1870087 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0718 00:01:34.417070 1870087 command_runner.go:130] > # stream_tls_key = ""
	I0718 00:01:34.417077 1870087 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0718 00:01:34.417085 1870087 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0718 00:01:34.417094 1870087 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0718 00:01:34.417099 1870087 command_runner.go:130] > # stream_tls_ca = ""
	I0718 00:01:34.417109 1870087 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0718 00:01:34.417118 1870087 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0718 00:01:34.417127 1870087 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0718 00:01:34.417133 1870087 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0718 00:01:34.417166 1870087 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0718 00:01:34.417178 1870087 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0718 00:01:34.417183 1870087 command_runner.go:130] > [crio.runtime]
	I0718 00:01:34.417190 1870087 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0718 00:01:34.417197 1870087 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0718 00:01:34.417205 1870087 command_runner.go:130] > # "nofile=1024:2048"
	I0718 00:01:34.417213 1870087 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0718 00:01:34.417218 1870087 command_runner.go:130] > # default_ulimits = [
	I0718 00:01:34.417223 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.417231 1870087 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0718 00:01:34.417238 1870087 command_runner.go:130] > # no_pivot = false
	I0718 00:01:34.417245 1870087 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0718 00:01:34.417255 1870087 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0718 00:01:34.417262 1870087 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0718 00:01:34.417272 1870087 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0718 00:01:34.417279 1870087 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0718 00:01:34.417287 1870087 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0718 00:01:34.417295 1870087 command_runner.go:130] > # conmon = ""
	I0718 00:01:34.417300 1870087 command_runner.go:130] > # Cgroup setting for conmon
	I0718 00:01:34.417308 1870087 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0718 00:01:34.417313 1870087 command_runner.go:130] > conmon_cgroup = "pod"
	I0718 00:01:34.417321 1870087 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0718 00:01:34.417330 1870087 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0718 00:01:34.417338 1870087 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0718 00:01:34.417346 1870087 command_runner.go:130] > # conmon_env = [
	I0718 00:01:34.417350 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.417357 1870087 command_runner.go:130] > # Additional environment variables to set for all the
	I0718 00:01:34.417363 1870087 command_runner.go:130] > # containers. These are overridden if set in the
	I0718 00:01:34.417371 1870087 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0718 00:01:34.417378 1870087 command_runner.go:130] > # default_env = [
	I0718 00:01:34.417383 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.417391 1870087 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0718 00:01:34.417398 1870087 command_runner.go:130] > # selinux = false
	I0718 00:01:34.417408 1870087 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0718 00:01:34.417418 1870087 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0718 00:01:34.417427 1870087 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0718 00:01:34.417432 1870087 command_runner.go:130] > # seccomp_profile = ""
	I0718 00:01:34.417442 1870087 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0718 00:01:34.417449 1870087 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0718 00:01:34.417467 1870087 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0718 00:01:34.417476 1870087 command_runner.go:130] > # which might increase security.
	I0718 00:01:34.417482 1870087 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0718 00:01:34.417489 1870087 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0718 00:01:34.417498 1870087 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0718 00:01:34.417509 1870087 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0718 00:01:34.417517 1870087 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0718 00:01:34.417528 1870087 command_runner.go:130] > # This option supports live configuration reload.
	I0718 00:01:34.417534 1870087 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0718 00:01:34.417542 1870087 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0718 00:01:34.417552 1870087 command_runner.go:130] > # the cgroup blockio controller.
	I0718 00:01:34.417557 1870087 command_runner.go:130] > # blockio_config_file = ""
	I0718 00:01:34.417565 1870087 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0718 00:01:34.417573 1870087 command_runner.go:130] > # irqbalance daemon.
	I0718 00:01:34.417580 1870087 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0718 00:01:34.417588 1870087 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0718 00:01:34.417594 1870087 command_runner.go:130] > # This option supports live configuration reload.
	I0718 00:01:34.417602 1870087 command_runner.go:130] > # rdt_config_file = ""
	I0718 00:01:34.417609 1870087 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0718 00:01:34.417618 1870087 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0718 00:01:34.417626 1870087 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0718 00:01:34.417631 1870087 command_runner.go:130] > # separate_pull_cgroup = ""
	I0718 00:01:34.417639 1870087 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0718 00:01:34.417650 1870087 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0718 00:01:34.417655 1870087 command_runner.go:130] > # will be added.
	I0718 00:01:34.417661 1870087 command_runner.go:130] > # default_capabilities = [
	I0718 00:01:34.417668 1870087 command_runner.go:130] > # 	"CHOWN",
	I0718 00:01:34.417672 1870087 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0718 00:01:34.417677 1870087 command_runner.go:130] > # 	"FSETID",
	I0718 00:01:34.417682 1870087 command_runner.go:130] > # 	"FOWNER",
	I0718 00:01:34.417687 1870087 command_runner.go:130] > # 	"SETGID",
	I0718 00:01:34.417694 1870087 command_runner.go:130] > # 	"SETUID",
	I0718 00:01:34.417700 1870087 command_runner.go:130] > # 	"SETPCAP",
	I0718 00:01:34.417707 1870087 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0718 00:01:34.417712 1870087 command_runner.go:130] > # 	"KILL",
	I0718 00:01:34.419127 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.419152 1870087 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0718 00:01:34.419163 1870087 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0718 00:01:34.419169 1870087 command_runner.go:130] > # add_inheritable_capabilities = true
	I0718 00:01:34.419179 1870087 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0718 00:01:34.419190 1870087 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0718 00:01:34.419196 1870087 command_runner.go:130] > # default_sysctls = [
	I0718 00:01:34.419200 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.419207 1870087 command_runner.go:130] > # List of devices on the host that a
	I0718 00:01:34.419216 1870087 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0718 00:01:34.419223 1870087 command_runner.go:130] > # allowed_devices = [
	I0718 00:01:34.419230 1870087 command_runner.go:130] > # 	"/dev/fuse",
	I0718 00:01:34.419235 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.419245 1870087 command_runner.go:130] > # List of additional devices. specified as
	I0718 00:01:34.419261 1870087 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0718 00:01:34.419272 1870087 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0718 00:01:34.419280 1870087 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0718 00:01:34.419285 1870087 command_runner.go:130] > # additional_devices = [
	I0718 00:01:34.419291 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.419298 1870087 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0718 00:01:34.419303 1870087 command_runner.go:130] > # cdi_spec_dirs = [
	I0718 00:01:34.419313 1870087 command_runner.go:130] > # 	"/etc/cdi",
	I0718 00:01:34.419318 1870087 command_runner.go:130] > # 	"/var/run/cdi",
	I0718 00:01:34.419324 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.419332 1870087 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0718 00:01:34.419345 1870087 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0718 00:01:34.419350 1870087 command_runner.go:130] > # Defaults to false.
	I0718 00:01:34.419356 1870087 command_runner.go:130] > # device_ownership_from_security_context = false
	I0718 00:01:34.419366 1870087 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0718 00:01:34.419376 1870087 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0718 00:01:34.419382 1870087 command_runner.go:130] > # hooks_dir = [
	I0718 00:01:34.419387 1870087 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0718 00:01:34.419392 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.419401 1870087 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0718 00:01:34.419414 1870087 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0718 00:01:34.419421 1870087 command_runner.go:130] > # its default mounts from the following two files:
	I0718 00:01:34.419425 1870087 command_runner.go:130] > #
	I0718 00:01:34.419433 1870087 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0718 00:01:34.419443 1870087 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0718 00:01:34.419450 1870087 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0718 00:01:34.419461 1870087 command_runner.go:130] > #
	I0718 00:01:34.419469 1870087 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0718 00:01:34.419477 1870087 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0718 00:01:34.419487 1870087 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0718 00:01:34.419494 1870087 command_runner.go:130] > #      only add mounts it finds in this file.
	I0718 00:01:34.419500 1870087 command_runner.go:130] > #
	I0718 00:01:34.419506 1870087 command_runner.go:130] > # default_mounts_file = ""
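As a reading of that format, here is a small Go sketch that parses a mounts file of /SRC:/DST lines, one per line as described above; skipping blank lines and '#' comments is this sketch's assumption, not a documented guarantee:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseMounts reads a default-mounts file in the /SRC:/DST, one-per-line format.
func parseMounts(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	mounts := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // assumption: comments and blanks are ignored
		}
		src, dst, ok := strings.Cut(line, ":")
		if !ok {
			return nil, fmt.Errorf("malformed mount %q", line)
		}
		mounts[src] = dst
	}
	return mounts, sc.Err()
}

func main() {
	m, err := parseMounts("/usr/share/containers/mounts.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(m)
}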
	I0718 00:01:34.419515 1870087 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0718 00:01:34.419523 1870087 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0718 00:01:34.419528 1870087 command_runner.go:130] > # pids_limit = 0
	I0718 00:01:34.419537 1870087 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0718 00:01:34.419547 1870087 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0718 00:01:34.419555 1870087 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0718 00:01:34.419565 1870087 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0718 00:01:34.419571 1870087 command_runner.go:130] > # log_size_max = -1
	I0718 00:01:34.419580 1870087 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0718 00:01:34.419588 1870087 command_runner.go:130] > # log_to_journald = false
	I0718 00:01:34.419595 1870087 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0718 00:01:34.419601 1870087 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0718 00:01:34.419608 1870087 command_runner.go:130] > # Path to directory for container attach sockets.
	I0718 00:01:34.419617 1870087 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0718 00:01:34.419625 1870087 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0718 00:01:34.419638 1870087 command_runner.go:130] > # bind_mount_prefix = ""
	I0718 00:01:34.419645 1870087 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0718 00:01:34.419650 1870087 command_runner.go:130] > # read_only = false
	I0718 00:01:34.419658 1870087 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0718 00:01:34.419676 1870087 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0718 00:01:34.419681 1870087 command_runner.go:130] > # live configuration reload.
	I0718 00:01:34.419687 1870087 command_runner.go:130] > # log_level = "info"
	I0718 00:01:34.419694 1870087 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0718 00:01:34.419703 1870087 command_runner.go:130] > # This option supports live configuration reload.
	I0718 00:01:34.419707 1870087 command_runner.go:130] > # log_filter = ""
	I0718 00:01:34.419715 1870087 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0718 00:01:34.419725 1870087 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0718 00:01:34.419730 1870087 command_runner.go:130] > # separated by comma.
	I0718 00:01:34.419736 1870087 command_runner.go:130] > # uid_mappings = ""
	I0718 00:01:34.419744 1870087 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0718 00:01:34.419753 1870087 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0718 00:01:34.419762 1870087 command_runner.go:130] > # separated by comma.
	I0718 00:01:34.419767 1870087 command_runner.go:130] > # gid_mappings = ""
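The containerID:hostID:size triples above are plain integers; a hedged Go sketch of a parser for the comma-separated form described (the idMap type is invented for illustration):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// idMap mirrors one containerID:hostID:size triple from uid_mappings/gid_mappings.
type idMap struct {
	ContainerID, HostID, Size uint32
}

// parseIDMappings splits a comma-separated list of triples as described above.
func parseIDMappings(s string) ([]idMap, error) {
	var out []idMap
	for _, r := range strings.Split(s, ",") {
		parts := strings.Split(strings.TrimSpace(r), ":")
		if len(parts) != 3 {
			return nil, fmt.Errorf("malformed range %q", r)
		}
		var vals [3]uint32
		for i, p := range parts {
			n, err := strconv.ParseUint(p, 10, 32)
			if err != nil {
				return nil, fmt.Errorf("range %q: %w", r, err)
			}
			vals[i] = uint32(n)
		}
		out = append(out, idMap{vals[0], vals[1], vals[2]})
	}
	return out, nil
}

func main() {
	m, _ := parseIDMappings("0:100000:65536,65536:165536:1000")
	fmt.Println(m)
}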
	I0718 00:01:34.419777 1870087 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0718 00:01:34.419785 1870087 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0718 00:01:34.419811 1870087 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0718 00:01:34.419821 1870087 command_runner.go:130] > # minimum_mappable_uid = -1
	I0718 00:01:34.419828 1870087 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0718 00:01:34.419835 1870087 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0718 00:01:34.419845 1870087 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0718 00:01:34.419855 1870087 command_runner.go:130] > # minimum_mappable_gid = -1
	I0718 00:01:34.419863 1870087 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0718 00:01:34.419870 1870087 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0718 00:01:34.419878 1870087 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0718 00:01:34.419887 1870087 command_runner.go:130] > # ctr_stop_timeout = 30
	I0718 00:01:34.419894 1870087 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0718 00:01:34.419902 1870087 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0718 00:01:34.419911 1870087 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0718 00:01:34.419918 1870087 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0718 00:01:34.419923 1870087 command_runner.go:130] > # drop_infra_ctr = true
	I0718 00:01:34.419931 1870087 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0718 00:01:34.419940 1870087 command_runner.go:130] > # You can use Linux CPU list format to specify the desired CPUs.
	I0718 00:01:34.419949 1870087 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0718 00:01:34.419957 1870087 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0718 00:01:34.419964 1870087 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0718 00:01:34.419970 1870087 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0718 00:01:34.419983 1870087 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0718 00:01:34.419992 1870087 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0718 00:01:34.419999 1870087 command_runner.go:130] > # pinns_path = ""
	I0718 00:01:34.420006 1870087 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0718 00:01:34.420017 1870087 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0718 00:01:34.420026 1870087 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0718 00:01:34.420041 1870087 command_runner.go:130] > # default_runtime = "runc"
	I0718 00:01:34.420048 1870087 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0718 00:01:34.420057 1870087 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0718 00:01:34.420070 1870087 command_runner.go:130] > # This option is to protect against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0718 00:01:34.420076 1870087 command_runner.go:130] > # creation as a file is not desired either.
	I0718 00:01:34.420088 1870087 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0718 00:01:34.420098 1870087 command_runner.go:130] > # the hostname is being managed dynamically.
	I0718 00:01:34.420104 1870087 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0718 00:01:34.420108 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.420116 1870087 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0718 00:01:34.420126 1870087 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0718 00:01:34.420136 1870087 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0718 00:01:34.420144 1870087 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0718 00:01:34.420150 1870087 command_runner.go:130] > #
	I0718 00:01:34.420156 1870087 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0718 00:01:34.420164 1870087 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0718 00:01:34.420171 1870087 command_runner.go:130] > #  runtime_type = "oci"
	I0718 00:01:34.420177 1870087 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0718 00:01:34.420187 1870087 command_runner.go:130] > #  privileged_without_host_devices = false
	I0718 00:01:34.420192 1870087 command_runner.go:130] > #  allowed_annotations = []
	I0718 00:01:34.420197 1870087 command_runner.go:130] > # Where:
	I0718 00:01:34.420204 1870087 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0718 00:01:34.420214 1870087 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0718 00:01:34.420224 1870087 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0718 00:01:34.420236 1870087 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0718 00:01:34.420241 1870087 command_runner.go:130] > #   in $PATH.
	I0718 00:01:34.420248 1870087 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0718 00:01:34.420256 1870087 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0718 00:01:34.420264 1870087 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0718 00:01:34.420271 1870087 command_runner.go:130] > #   state.
	I0718 00:01:34.420279 1870087 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0718 00:01:34.420286 1870087 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0718 00:01:34.420294 1870087 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0718 00:01:34.420303 1870087 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0718 00:01:34.420311 1870087 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0718 00:01:34.420322 1870087 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0718 00:01:34.420328 1870087 command_runner.go:130] > #   The currently recognized values are:
	I0718 00:01:34.420336 1870087 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0718 00:01:34.420346 1870087 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0718 00:01:34.420356 1870087 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0718 00:01:34.420363 1870087 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0718 00:01:34.420387 1870087 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0718 00:01:34.420400 1870087 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0718 00:01:34.420408 1870087 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0718 00:01:34.420418 1870087 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0718 00:01:34.420426 1870087 command_runner.go:130] > #   should be moved to the container's cgroup
	I0718 00:01:34.420434 1870087 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0718 00:01:34.420441 1870087 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0718 00:01:34.420448 1870087 command_runner.go:130] > runtime_type = "oci"
	I0718 00:01:34.420453 1870087 command_runner.go:130] > runtime_root = "/run/runc"
	I0718 00:01:34.420459 1870087 command_runner.go:130] > runtime_config_path = ""
	I0718 00:01:34.420464 1870087 command_runner.go:130] > monitor_path = ""
	I0718 00:01:34.420469 1870087 command_runner.go:130] > monitor_cgroup = ""
	I0718 00:01:34.420476 1870087 command_runner.go:130] > monitor_exec_cgroup = ""
	I0718 00:01:34.420493 1870087 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0718 00:01:34.420501 1870087 command_runner.go:130] > # running containers
	I0718 00:01:34.420506 1870087 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0718 00:01:34.420514 1870087 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0718 00:01:34.420522 1870087 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0718 00:01:34.420532 1870087 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0718 00:01:34.420539 1870087 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0718 00:01:34.420547 1870087 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0718 00:01:34.420553 1870087 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0718 00:01:34.420558 1870087 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0718 00:01:34.420564 1870087 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0718 00:01:34.420572 1870087 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0718 00:01:34.420580 1870087 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0718 00:01:34.420589 1870087 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0718 00:01:34.420597 1870087 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0718 00:01:34.420606 1870087 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0718 00:01:34.420619 1870087 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0718 00:01:34.420626 1870087 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0718 00:01:34.420638 1870087 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0718 00:01:34.420650 1870087 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0718 00:01:34.420657 1870087 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0718 00:01:34.420667 1870087 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0718 00:01:34.420674 1870087 command_runner.go:130] > # Example:
	I0718 00:01:34.420680 1870087 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0718 00:01:34.420686 1870087 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0718 00:01:34.420695 1870087 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0718 00:01:34.420702 1870087 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0718 00:01:34.420711 1870087 command_runner.go:130] > # cpuset = "0-1"
	I0718 00:01:34.420716 1870087 command_runner.go:130] > # cpushares = 0
	I0718 00:01:34.420720 1870087 command_runner.go:130] > # Where:
	I0718 00:01:34.420726 1870087 command_runner.go:130] > # The workload name is workload-type.
	I0718 00:01:34.420735 1870087 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0718 00:01:34.420743 1870087 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0718 00:01:34.420753 1870087 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0718 00:01:34.420763 1870087 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0718 00:01:34.420770 1870087 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0718 00:01:34.420778 1870087 command_runner.go:130] > # 
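Tying the workload example together: a pod opts in purely through annotations, so the pod metadata would look roughly like the client-go sketch below (the container name "app" and the cpushares value are illustrative, following the example form in the config above):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "tuned",
			Annotations: map[string]string{
				// Opt the pod into the workload; the value is ignored.
				"io.crio/workload": "",
				// Override cpushares for the container named "app".
				"io.crio.workload-type/app": `{"cpushares": "512"}`,
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "registry.k8s.io/pause:3.9"}},
		},
	}
	fmt.Println(pod.Annotations)
}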
	I0718 00:01:34.420786 1870087 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0718 00:01:34.420790 1870087 command_runner.go:130] > #
	I0718 00:01:34.420799 1870087 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0718 00:01:34.420808 1870087 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0718 00:01:34.420824 1870087 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0718 00:01:34.420833 1870087 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0718 00:01:34.420840 1870087 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0718 00:01:34.420847 1870087 command_runner.go:130] > [crio.image]
	I0718 00:01:34.420855 1870087 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0718 00:01:34.420863 1870087 command_runner.go:130] > # default_transport = "docker://"
	I0718 00:01:34.420871 1870087 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0718 00:01:34.420879 1870087 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0718 00:01:34.420888 1870087 command_runner.go:130] > # global_auth_file = ""
	I0718 00:01:34.420894 1870087 command_runner.go:130] > # The image used to instantiate infra containers.
	I0718 00:01:34.420900 1870087 command_runner.go:130] > # This option supports live configuration reload.
	I0718 00:01:34.420907 1870087 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0718 00:01:34.420915 1870087 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0718 00:01:34.420925 1870087 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0718 00:01:34.420932 1870087 command_runner.go:130] > # This option supports live configuration reload.
	I0718 00:01:34.420937 1870087 command_runner.go:130] > # pause_image_auth_file = ""
	I0718 00:01:34.420959 1870087 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0718 00:01:34.420983 1870087 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0718 00:01:34.420996 1870087 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0718 00:01:34.421004 1870087 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0718 00:01:34.421010 1870087 command_runner.go:130] > # pause_command = "/pause"
	I0718 00:01:34.421021 1870087 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0718 00:01:34.421032 1870087 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0718 00:01:34.421039 1870087 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0718 00:01:34.421049 1870087 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0718 00:01:34.421057 1870087 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0718 00:01:34.421063 1870087 command_runner.go:130] > # signature_policy = ""
	I0718 00:01:34.421070 1870087 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0718 00:01:34.421080 1870087 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0718 00:01:34.421087 1870087 command_runner.go:130] > # changing them here.
	I0718 00:01:34.421092 1870087 command_runner.go:130] > # insecure_registries = [
	I0718 00:01:34.421099 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.421106 1870087 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0718 00:01:34.421113 1870087 command_runner.go:130] > # ignore; the last ignores volumes entirely.
	I0718 00:01:34.421122 1870087 command_runner.go:130] > # image_volumes = "mkdir"
	I0718 00:01:34.421129 1870087 command_runner.go:130] > # Temporary directory to use for storing big files
	I0718 00:01:34.421134 1870087 command_runner.go:130] > # big_files_temporary_dir = ""
	I0718 00:01:34.421143 1870087 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0718 00:01:34.421152 1870087 command_runner.go:130] > # CNI plugins.
	I0718 00:01:34.421157 1870087 command_runner.go:130] > [crio.network]
	I0718 00:01:34.421165 1870087 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0718 00:01:34.421171 1870087 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0718 00:01:34.421179 1870087 command_runner.go:130] > # cni_default_network = ""
	I0718 00:01:34.421193 1870087 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0718 00:01:34.421199 1870087 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0718 00:01:34.421207 1870087 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0718 00:01:34.421214 1870087 command_runner.go:130] > # plugin_dirs = [
	I0718 00:01:34.421219 1870087 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0718 00:01:34.421223 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.421230 1870087 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0718 00:01:34.421235 1870087 command_runner.go:130] > [crio.metrics]
	I0718 00:01:34.421244 1870087 command_runner.go:130] > # Globally enable or disable metrics support.
	I0718 00:01:34.421252 1870087 command_runner.go:130] > # enable_metrics = false
	I0718 00:01:34.421258 1870087 command_runner.go:130] > # Specify enabled metrics collectors.
	I0718 00:01:34.421263 1870087 command_runner.go:130] > # Per default all metrics are enabled.
	I0718 00:01:34.421271 1870087 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0718 00:01:34.421283 1870087 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0718 00:01:34.421290 1870087 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0718 00:01:34.421298 1870087 command_runner.go:130] > # metrics_collectors = [
	I0718 00:01:34.421303 1870087 command_runner.go:130] > # 	"operations",
	I0718 00:01:34.421309 1870087 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0718 00:01:34.421315 1870087 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0718 00:01:34.421322 1870087 command_runner.go:130] > # 	"operations_errors",
	I0718 00:01:34.421329 1870087 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0718 00:01:34.421335 1870087 command_runner.go:130] > # 	"image_pulls_by_name",
	I0718 00:01:34.421343 1870087 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0718 00:01:34.421348 1870087 command_runner.go:130] > # 	"image_pulls_failures",
	I0718 00:01:34.421354 1870087 command_runner.go:130] > # 	"image_pulls_successes",
	I0718 00:01:34.421363 1870087 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0718 00:01:34.421368 1870087 command_runner.go:130] > # 	"image_layer_reuse",
	I0718 00:01:34.421373 1870087 command_runner.go:130] > # 	"containers_oom_total",
	I0718 00:01:34.421380 1870087 command_runner.go:130] > # 	"containers_oom",
	I0718 00:01:34.421385 1870087 command_runner.go:130] > # 	"processes_defunct",
	I0718 00:01:34.421391 1870087 command_runner.go:130] > # 	"operations_total",
	I0718 00:01:34.421401 1870087 command_runner.go:130] > # 	"operations_latency_seconds",
	I0718 00:01:34.421407 1870087 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0718 00:01:34.421414 1870087 command_runner.go:130] > # 	"operations_errors_total",
	I0718 00:01:34.421422 1870087 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0718 00:01:34.421428 1870087 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0718 00:01:34.421436 1870087 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0718 00:01:34.421442 1870087 command_runner.go:130] > # 	"image_pulls_success_total",
	I0718 00:01:34.421447 1870087 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0718 00:01:34.421452 1870087 command_runner.go:130] > # 	"containers_oom_count_total",
	I0718 00:01:34.421460 1870087 command_runner.go:130] > # ]
	I0718 00:01:34.421466 1870087 command_runner.go:130] > # The port on which the metrics server will listen.
	I0718 00:01:34.421471 1870087 command_runner.go:130] > # metrics_port = 9090
	I0718 00:01:34.421477 1870087 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0718 00:01:34.421482 1870087 command_runner.go:130] > # metrics_socket = ""
	I0718 00:01:34.421489 1870087 command_runner.go:130] > # The certificate for the secure metrics server.
	I0718 00:01:34.421499 1870087 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0718 00:01:34.421509 1870087 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0718 00:01:34.421515 1870087 command_runner.go:130] > # certificate on any modification event.
	I0718 00:01:34.421521 1870087 command_runner.go:130] > # metrics_cert = ""
	I0718 00:01:34.421529 1870087 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0718 00:01:34.421536 1870087 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0718 00:01:34.421541 1870087 command_runner.go:130] > # metrics_key = ""
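With enable_metrics = true and the default metrics_port, the collectors listed above are exposed as a plain Prometheus endpoint; a minimal Go scrape, assuming the conventional /metrics path on localhost:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Assumes enable_metrics = true and the default metrics_port = 9090
	// from the config above; /metrics is the usual Prometheus convention.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // dump the exposition text as-is
}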
	I0718 00:01:34.421550 1870087 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0718 00:01:34.421555 1870087 command_runner.go:130] > [crio.tracing]
	I0718 00:01:34.421562 1870087 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0718 00:01:34.421567 1870087 command_runner.go:130] > # enable_tracing = false
	I0718 00:01:34.421574 1870087 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0718 00:01:34.421582 1870087 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0718 00:01:34.421590 1870087 command_runner.go:130] > # Number of samples to collect per million spans.
	I0718 00:01:34.421596 1870087 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0718 00:01:34.421606 1870087 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0718 00:01:34.421611 1870087 command_runner.go:130] > [crio.stats]
	I0718 00:01:34.421618 1870087 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0718 00:01:34.421625 1870087 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0718 00:01:34.421634 1870087 command_runner.go:130] > # stats_collection_period = 0
	I0718 00:01:34.423802 1870087 command_runner.go:130] ! time="2023-07-18 00:01:34.412871567Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0718 00:01:34.423846 1870087 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0718 00:01:34.423915 1870087 cni.go:84] Creating CNI manager for ""
	I0718 00:01:34.423926 1870087 cni.go:137] 2 nodes found, recommending kindnet
	I0718 00:01:34.423936 1870087 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0718 00:01:34.423974 1870087 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-451668 NodeName:multinode-451668-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 00:01:34.424106 1870087 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-451668-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0718 00:01:34.424162 1870087 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-451668-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-451668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0718 00:01:34.424232 1870087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0718 00:01:34.434503 1870087 command_runner.go:130] > kubeadm
	I0718 00:01:34.434524 1870087 command_runner.go:130] > kubectl
	I0718 00:01:34.434529 1870087 command_runner.go:130] > kubelet
	I0718 00:01:34.435773 1870087 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 00:01:34.435840 1870087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0718 00:01:34.447046 1870087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0718 00:01:34.468826 1870087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 00:01:34.490402 1870087 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0718 00:01:34.495311 1870087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 00:01:34.510668 1870087 host.go:66] Checking if "multinode-451668" exists ...
	I0718 00:01:34.510963 1870087 config.go:182] Loaded profile config "multinode-451668": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0718 00:01:34.511203 1870087 start.go:301] JoinCluster: &{Name:multinode-451668 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-451668 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0718 00:01:34.511292 1870087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0718 00:01:34.511346 1870087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:01:34.529690 1870087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34738 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa Username:docker}
	I0718 00:01:34.706722 1870087 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 3ibep8.nlyfx8g48u9ueet1 --discovery-token-ca-cert-hash sha256:b5091145d8291edee463dab95a1bdfeb1e97f89842481bec35f68788c073ce7f 
	I0718 00:01:34.706766 1870087 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0718 00:01:34.706793 1870087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3ibep8.nlyfx8g48u9ueet1 --discovery-token-ca-cert-hash sha256:b5091145d8291edee463dab95a1bdfeb1e97f89842481bec35f68788c073ce7f --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-451668-m02"
	I0718 00:01:34.748632 1870087 command_runner.go:130] > [preflight] Running pre-flight checks
	I0718 00:01:34.796816 1870087 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0718 00:01:34.796848 1870087 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1039-aws
	I0718 00:01:34.796861 1870087 command_runner.go:130] > OS: Linux
	I0718 00:01:34.796868 1870087 command_runner.go:130] > CGROUPS_CPU: enabled
	I0718 00:01:34.796876 1870087 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0718 00:01:34.796882 1870087 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0718 00:01:34.796888 1870087 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0718 00:01:34.796898 1870087 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0718 00:01:34.796905 1870087 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0718 00:01:34.796922 1870087 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0718 00:01:34.796932 1870087 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0718 00:01:34.796938 1870087 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0718 00:01:34.909976 1870087 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0718 00:01:34.910001 1870087 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0718 00:01:34.944838 1870087 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 00:01:34.944866 1870087 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 00:01:34.944874 1870087 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0718 00:01:35.046505 1870087 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0718 00:01:38.061145 1870087 command_runner.go:130] > This node has joined the cluster:
	I0718 00:01:38.061178 1870087 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0718 00:01:38.061186 1870087 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0718 00:01:38.061194 1870087 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0718 00:01:38.064731 1870087 command_runner.go:130] ! W0718 00:01:34.748104    1021 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0718 00:01:38.064772 1870087 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0718 00:01:38.064785 1870087 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0718 00:01:38.064806 1870087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3ibep8.nlyfx8g48u9ueet1 --discovery-token-ca-cert-hash sha256:b5091145d8291edee463dab95a1bdfeb1e97f89842481bec35f68788c073ce7f --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-451668-m02": (3.358000259s)
	I0718 00:01:38.064822 1870087 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0718 00:01:38.340736 1870087 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0718 00:01:38.340775 1870087 start.go:303] JoinCluster complete in 3.829570467s
	I0718 00:01:38.340786 1870087 cni.go:84] Creating CNI manager for ""
	I0718 00:01:38.340792 1870087 cni.go:137] 2 nodes found, recommending kindnet
	I0718 00:01:38.340846 1870087 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0718 00:01:38.346948 1870087 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0718 00:01:38.346982 1870087 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0718 00:01:38.346992 1870087 command_runner.go:130] > Device: 3ah/58d	Inode: 2083390     Links: 1
	I0718 00:01:38.347000 1870087 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0718 00:01:38.347009 1870087 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0718 00:01:38.347015 1870087 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0718 00:01:38.347021 1870087 command_runner.go:130] > Change: 2023-07-17 23:37:44.120234222 +0000
	I0718 00:01:38.347031 1870087 command_runner.go:130] >  Birth: 2023-07-17 23:37:44.076234663 +0000
	I0718 00:01:38.347072 1870087 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0718 00:01:38.347087 1870087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0718 00:01:38.368976 1870087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0718 00:01:38.867395 1870087 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0718 00:01:38.872012 1870087 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0718 00:01:38.875285 1870087 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0718 00:01:38.889395 1870087 command_runner.go:130] > daemonset.apps/kindnet configured
	I0718 00:01:38.895516 1870087 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0718 00:01:38.895790 1870087 kapi.go:59] client config for multinode-451668: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.key", CAFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 00:01:38.896103 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0718 00:01:38.896111 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:38.896120 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:38.896128 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:38.898718 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:38.898735 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:38.898744 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:38.898751 1870087 round_trippers.go:580]     Content-Length: 291
	I0718 00:01:38.898758 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:38 GMT
	I0718 00:01:38.898764 1870087 round_trippers.go:580]     Audit-Id: bfa5c62b-a93b-4905-a9f9-31ca815abe4f
	I0718 00:01:38.898771 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:38.898777 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:38.898784 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:38.899000 1870087 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"31e25b05-9d9a-48b7-ba3e-9797c0a06c06","resourceVersion":"459","creationTimestamp":"2023-07-18T00:00:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0718 00:01:38.899089 1870087 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-451668" context rescaled to 1 replicas
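The raw GET above hits the deployment's scale subresource; the same rescale-to-one can be expressed with client-go's GetScale/UpdateScale, sketched here under the assumption of a standard kubeconfig in the default location:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Read the current scale of kube-system/coredns via the scale subresource.
	scale, err := cs.AppsV1().Deployments("kube-system").
		GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Rescale to 1 replica, as the log above reports.
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").
			UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns scaled to 1 replica")
}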
	I0718 00:01:38.899112 1870087 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0718 00:01:38.901252 1870087 out.go:177] * Verifying Kubernetes components...
	I0718 00:01:38.903066 1870087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 00:01:38.919619 1870087 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0718 00:01:38.919936 1870087 kapi.go:59] client config for multinode-451668: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/multinode-451668/client.key", CAFile:"/home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 00:01:38.920755 1870087 node_ready.go:35] waiting up to 6m0s for node "multinode-451668-m02" to be "Ready" ...
	I0718 00:01:38.920838 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:38.920844 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:38.920853 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:38.920861 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:38.928464 1870087 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0718 00:01:38.928486 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:38.928494 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:38 GMT
	I0718 00:01:38.928502 1870087 round_trippers.go:580]     Audit-Id: 26a9b259-c04a-46be-b5b1-4afe6a2004c8
	I0718 00:01:38.928508 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:38.928515 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:38.928545 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:38.928556 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:38.928670 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"495","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I0718 00:01:39.429793 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:39.429819 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:39.429830 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:39.429837 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:39.432413 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:39.432435 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:39.432444 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:39.432451 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:39.432458 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:39.432465 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:39.432472 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:39 GMT
	I0718 00:01:39.432479 1870087 round_trippers.go:580]     Audit-Id: 98603856-8f2e-4da9-9009-fd45a53e5cf6
	I0718 00:01:39.432577 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:39.929651 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:39.929675 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:39.929686 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:39.929693 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:39.932525 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:39.932553 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:39.932562 1870087 round_trippers.go:580]     Audit-Id: 7fc89849-4f4f-420f-8d0b-7d18b5feaccb
	I0718 00:01:39.932569 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:39.932575 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:39.932582 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:39.932589 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:39.932597 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:39 GMT
	I0718 00:01:39.932741 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:40.429904 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:40.429929 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:40.429939 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:40.429950 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:40.432538 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:40.432561 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:40.432570 1870087 round_trippers.go:580]     Audit-Id: fc222b38-a944-42e1-9b8a-1a999424c0f7
	I0718 00:01:40.432577 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:40.432584 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:40.432629 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:40.432638 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:40.432645 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:40 GMT
	I0718 00:01:40.432754 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:40.929195 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:40.929219 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:40.929229 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:40.929236 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:40.931869 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:40.931894 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:40.931903 1870087 round_trippers.go:580]     Audit-Id: c7179a73-5b4a-4cec-b5e9-9fc44926283c
	I0718 00:01:40.931910 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:40.931917 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:40.931923 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:40.931930 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:40.931939 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:40 GMT
	I0718 00:01:40.932237 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:40.932603 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
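	The iterations above show the readiness poll pattern: node_ready.go re-GETs the Node object roughly every 500 ms, for up to the announced 6 minutes, until its NodeReady condition turns True. A minimal sketch of that pattern with client-go follows (illustrative, not minikube's actual implementation; the clientset and node name are assumed):

```go
// Sketch of a node-readiness poll like the one this log records: fetch the
// Node every 500ms, up to 6 minutes, until NodeReady reports True.
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as transient; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not reported yet
		})
}
```

	Each unready iteration in the log corresponds to one false return from the condition function; the "has status \"Ready\":\"False\"" lines are the periodic progress reports.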
	I0718 00:01:41.429269 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:41.429291 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:41.429301 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:41.429309 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:41.432040 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:41.432067 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:41.432075 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:41.432083 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:41.432089 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:41 GMT
	I0718 00:01:41.432096 1870087 round_trippers.go:580]     Audit-Id: aaeeedc4-82cf-485f-932c-81436bc50994
	I0718 00:01:41.432103 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:41.432110 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:41.432248 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:41.929333 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:41.929360 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:41.929373 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:41.929381 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:41.932198 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:41.932228 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:41.932237 1870087 round_trippers.go:580]     Audit-Id: 68a2ccfe-901c-4782-a193-5e6bae0a9cef
	I0718 00:01:41.932244 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:41.932251 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:41.932257 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:41.932264 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:41.932271 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:41 GMT
	I0718 00:01:41.932405 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:42.430005 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:42.430030 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:42.430044 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:42.430054 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:42.432973 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:42.432999 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:42.433008 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:42.433015 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:42 GMT
	I0718 00:01:42.433022 1870087 round_trippers.go:580]     Audit-Id: 6e4cc6c0-6c1a-4539-9902-90186398b897
	I0718 00:01:42.433029 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:42.433035 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:42.433042 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:42.433150 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:42.929804 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:42.929865 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:42.929877 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:42.929885 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:42.932530 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:42.932557 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:42.932567 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:42.932574 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:42.932581 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:42 GMT
	I0718 00:01:42.932589 1870087 round_trippers.go:580]     Audit-Id: aad1e72f-4780-466a-9c47-a330e4c7ca2a
	I0718 00:01:42.932595 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:42.932602 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:42.932725 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:42.933092 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
	I0718 00:01:43.429829 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:43.429852 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:43.429862 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:43.429870 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:43.432439 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:43.432465 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:43.432474 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:43.432482 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:43 GMT
	I0718 00:01:43.432488 1870087 round_trippers.go:580]     Audit-Id: 2f970582-87fe-4a01-b35c-b0d976f92fde
	I0718 00:01:43.432495 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:43.432502 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:43.432509 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:43.432626 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:43.929666 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:43.929689 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:43.929699 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:43.929706 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:43.932317 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:43.932342 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:43.932351 1870087 round_trippers.go:580]     Audit-Id: faa84102-71b2-4d1d-b08f-8f0c0ee978ed
	I0718 00:01:43.932359 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:43.932366 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:43.932372 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:43.932379 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:43.932386 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:43 GMT
	I0718 00:01:43.932699 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:44.429260 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:44.429288 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:44.429300 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:44.429308 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:44.431956 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:44.431983 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:44.431992 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:44.431999 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:44.432006 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:44 GMT
	I0718 00:01:44.432013 1870087 round_trippers.go:580]     Audit-Id: 5a85a2ed-5520-4554-a127-bf53acd8a5f9
	I0718 00:01:44.432020 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:44.432026 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:44.432134 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:44.929193 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:44.929216 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:44.929226 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:44.929234 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:44.931819 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:44.931844 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:44.931853 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:44 GMT
	I0718 00:01:44.931860 1870087 round_trippers.go:580]     Audit-Id: c8577a51-1351-4187-8210-b8242dabcef4
	I0718 00:01:44.931868 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:44.931875 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:44.931882 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:44.931892 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:44.932210 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:45.429665 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:45.429690 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:45.429701 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:45.429709 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:45.432378 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:45.432405 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:45.432415 1870087 round_trippers.go:580]     Audit-Id: 51a646a8-8a63-4db6-9049-662f7e8fcf84
	I0718 00:01:45.432422 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:45.432429 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:45.432436 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:45.432443 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:45.432454 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:45 GMT
	I0718 00:01:45.432582 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:45.432970 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
	I0718 00:01:45.929777 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:45.929798 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:45.929808 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:45.929816 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:45.932301 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:45.932322 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:45.932331 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:45.932338 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:45.932344 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:45.932351 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:45 GMT
	I0718 00:01:45.932358 1870087 round_trippers.go:580]     Audit-Id: 235da807-7abb-47a8-ae9d-15a51d220ead
	I0718 00:01:45.932366 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:45.932585 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:46.429279 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:46.429303 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:46.429314 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:46.429321 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:46.432098 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:46.432123 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:46.432131 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:46.432138 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:46 GMT
	I0718 00:01:46.432145 1870087 round_trippers.go:580]     Audit-Id: 57ba97d3-65e6-416b-8df9-04d782329e84
	I0718 00:01:46.432152 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:46.432159 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:46.432170 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:46.432273 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:46.929291 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:46.929312 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:46.929322 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:46.929330 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:46.931826 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:46.931847 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:46.931855 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:46.931863 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:46.931870 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:46.931876 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:46 GMT
	I0718 00:01:46.931883 1870087 round_trippers.go:580]     Audit-Id: c70b9f37-1a22-4d1d-8178-548f78eea780
	I0718 00:01:46.931889 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:46.932038 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:47.429596 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:47.429616 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:47.429626 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:47.429634 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:47.432262 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:47.432289 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:47.432300 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:47.432307 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:47.432314 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:47.432321 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:47 GMT
	I0718 00:01:47.432327 1870087 round_trippers.go:580]     Audit-Id: 20eb1e96-ce85-4aca-9009-34d0242bc152
	I0718 00:01:47.432334 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:47.432438 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"508","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5292 chars]
	I0718 00:01:47.930197 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:47.930262 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:47.930279 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:47.930287 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:47.932833 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:47.932867 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:47.932883 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:47.932896 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:47 GMT
	I0718 00:01:47.932904 1870087 round_trippers.go:580]     Audit-Id: ba224efa-5004-4254-8e42-207b2683fe44
	I0718 00:01:47.932910 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:47.932917 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:47.932924 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:47.933071 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:47.933524 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
	I0718 00:01:48.429246 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:48.429269 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:48.429279 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:48.429286 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:48.432583 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:01:48.432608 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:48.432617 1870087 round_trippers.go:580]     Audit-Id: 9daa0f82-4aa5-44a8-b37f-0cea6d16a027
	I0718 00:01:48.432625 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:48.432638 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:48.432645 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:48.432652 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:48.432661 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:48 GMT
	I0718 00:01:48.432765 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:48.929853 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:48.929877 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:48.929886 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:48.929895 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:48.932412 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:48.932436 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:48.932445 1870087 round_trippers.go:580]     Audit-Id: 0c8051c6-ace7-4d25-b54e-f2a6966ebb0f
	I0718 00:01:48.932452 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:48.932462 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:48.932469 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:48.932480 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:48.932492 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:48 GMT
	I0718 00:01:48.932599 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:49.429235 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:49.429263 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:49.429274 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:49.429281 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:49.432456 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:01:49.432479 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:49.432488 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:49 GMT
	I0718 00:01:49.432495 1870087 round_trippers.go:580]     Audit-Id: 1ac054fb-cbfc-4466-ae1d-dd736851127b
	I0718 00:01:49.432502 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:49.432508 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:49.432515 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:49.432522 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:49.432627 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:49.930086 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:49.930110 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:49.930121 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:49.930129 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:49.932709 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:49.932733 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:49.932742 1870087 round_trippers.go:580]     Audit-Id: 2d269b9b-8c49-4eea-a3cd-f64197b1b894
	I0718 00:01:49.932748 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:49.932755 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:49.932762 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:49.932769 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:49.932775 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:49 GMT
	I0718 00:01:49.932907 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:50.430041 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:50.430060 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:50.430070 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:50.430079 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:50.433718 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:01:50.433742 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:50.433751 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:50.433757 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:50.433764 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:50.433771 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:50 GMT
	I0718 00:01:50.433778 1870087 round_trippers.go:580]     Audit-Id: 0cf47601-a457-4536-a0f9-0af4c697f3be
	I0718 00:01:50.433785 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:50.433891 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:50.434267 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
	I0718 00:01:50.929879 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:50.929905 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:50.929915 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:50.929924 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:50.937240 1870087 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0718 00:01:50.937263 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:50.937272 1870087 round_trippers.go:580]     Audit-Id: 00099a91-4c92-4f7f-86ec-442b1f8d2083
	I0718 00:01:50.937278 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:50.937285 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:50.937292 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:50.937298 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:50.937305 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:50 GMT
	I0718 00:01:50.937415 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:51.429261 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:51.429285 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:51.429296 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:51.429304 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:51.431870 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:51.431893 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:51.431902 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:51.431910 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:51.431917 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:51 GMT
	I0718 00:01:51.431923 1870087 round_trippers.go:580]     Audit-Id: 596291dc-04b2-4071-89a2-779fc3f7b3d2
	I0718 00:01:51.431930 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:51.431936 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:51.432048 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:51.929750 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:51.929771 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:51.929782 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:51.929791 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:51.932346 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:51.932371 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:51.932380 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:51.932387 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:51 GMT
	I0718 00:01:51.932394 1870087 round_trippers.go:580]     Audit-Id: 11d70b7e-3dcb-42c4-acb3-58e8a141a82f
	I0718 00:01:51.932401 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:51.932407 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:51.932414 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:51.933525 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:52.429578 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:52.429602 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:52.429613 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:52.429621 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:52.432288 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:52.432314 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:52.432323 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:52.432330 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:52.432337 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:52.432344 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:52.432352 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:52 GMT
	I0718 00:01:52.432358 1870087 round_trippers.go:580]     Audit-Id: 3a937a0e-7a3e-4204-8bd5-6139dbe6da55
	I0718 00:01:52.432528 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:52.930161 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:52.930186 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:52.930196 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:52.930204 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:52.932702 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:52.932722 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:52.932730 1870087 round_trippers.go:580]     Audit-Id: eacfd506-5c48-496b-bff5-e3881186e358
	I0718 00:01:52.932738 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:52.932744 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:52.932751 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:52.932758 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:52.932764 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:52 GMT
	I0718 00:01:52.932854 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:52.933221 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
	I0718 00:01:53.429956 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:53.429977 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:53.429987 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:53.429995 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:53.432677 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:53.432699 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:53.432708 1870087 round_trippers.go:580]     Audit-Id: 7994d1d8-8ce3-4a0f-bd14-8cdc55ba379c
	I0718 00:01:53.432715 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:53.432721 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:53.432728 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:53.432734 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:53.432741 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:53 GMT
	I0718 00:01:53.432831 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:53.929263 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:53.929284 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:53.929300 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:53.929308 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:53.931717 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:53.931749 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:53.931758 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:53.931765 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:53.931773 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:53 GMT
	I0718 00:01:53.931780 1870087 round_trippers.go:580]     Audit-Id: b45f3fa8-119c-43a5-a2dd-06550872fa05
	I0718 00:01:53.931789 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:53.931796 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:53.931937 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:54.430058 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:54.430080 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:54.430092 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:54.430101 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:54.432690 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:54.432713 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:54.432722 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:54.432730 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:54.432736 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:54.432744 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:54 GMT
	I0718 00:01:54.432751 1870087 round_trippers.go:580]     Audit-Id: 81b4cd95-a612-4374-9eaf-bb64a0cd0ee2
	I0718 00:01:54.432757 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:54.432853 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:54.929267 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:54.929290 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:54.929301 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:54.929308 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:54.931872 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:54.931900 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:54.931909 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:54.931916 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:54.931923 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:54.931930 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:54.931940 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:54 GMT
	I0718 00:01:54.931947 1870087 round_trippers.go:580]     Audit-Id: 0a3e8f75-e6d8-4b1e-b512-e0a5cd135337
	I0718 00:01:54.932123 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:55.429188 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:55.429214 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:55.429225 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:55.429233 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:55.432013 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:55.432040 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:55.432050 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:55.432058 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:55.432064 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:55.432074 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:55.432081 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:55 GMT
	I0718 00:01:55.432088 1870087 round_trippers.go:580]     Audit-Id: 12f884fd-c349-4262-91e9-eea7586f0f7c
	I0718 00:01:55.432237 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:55.432615 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
	I0718 00:01:55.930151 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:55.930173 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:55.930185 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:55.930192 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:55.932628 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:55.932653 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:55.932662 1870087 round_trippers.go:580]     Audit-Id: 9b3071d4-81a2-4032-a36c-4dea901be99b
	I0718 00:01:55.932670 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:55.932676 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:55.932683 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:55.932690 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:55.932697 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:55 GMT
	I0718 00:01:55.932987 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:56.429883 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:56.429912 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:56.429922 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:56.429930 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:56.432517 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:56.432541 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:56.432550 1870087 round_trippers.go:580]     Audit-Id: a39ef20e-3e0a-4cfb-8b60-ea15e6de7390
	I0718 00:01:56.432558 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:56.432566 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:56.432572 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:56.432579 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:56.432589 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:56 GMT
	I0718 00:01:56.432775 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:56.929915 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:56.929935 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:56.929945 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:56.929953 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:56.932587 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:56.932615 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:56.932623 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:56.932630 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:56.932636 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:56.932643 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:56 GMT
	I0718 00:01:56.932649 1870087 round_trippers.go:580]     Audit-Id: 25d9e882-2b6a-4691-ab2d-3568e7b724ed
	I0718 00:01:56.932656 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:56.932758 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:57.429856 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:57.429881 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:57.429892 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:57.429900 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:57.432672 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:57.432699 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:57.432710 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:57 GMT
	I0718 00:01:57.432717 1870087 round_trippers.go:580]     Audit-Id: d09dd061-15c4-4b94-b657-ea15c8b43c7e
	I0718 00:01:57.432724 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:57.432730 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:57.432737 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:57.432744 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:57.432833 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:57.433209 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
	I0718 00:01:57.929511 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:57.929532 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:57.929545 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:57.929552 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:57.932195 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:57.932223 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:57.932231 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:57.932239 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:57 GMT
	I0718 00:01:57.932246 1870087 round_trippers.go:580]     Audit-Id: 0dd5eaa4-5396-4fea-8eef-0f22d9e9b165
	I0718 00:01:57.932252 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:57.932259 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:57.932265 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:57.932392 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:58.429272 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:58.429296 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:58.429306 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:58.429314 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:58.431908 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:58.431931 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:58.431939 1870087 round_trippers.go:580]     Audit-Id: b3c35fa9-f814-468a-8b4a-17887160608a
	I0718 00:01:58.431946 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:58.431952 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:58.431959 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:58.431966 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:58.431973 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:58 GMT
	I0718 00:01:58.432066 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:58.930173 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:58.930194 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:58.930205 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:58.930213 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:58.932784 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:58.932808 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:58.932816 1870087 round_trippers.go:580]     Audit-Id: 8e9a236b-dc76-4780-91f0-bbeae0e35ab3
	I0718 00:01:58.932823 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:58.932830 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:58.932836 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:58.932843 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:58.932851 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:58 GMT
	I0718 00:01:58.932962 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:59.429502 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:59.429526 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:59.429537 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:59.429544 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:59.432160 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:59.432187 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:59.432196 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:59 GMT
	I0718 00:01:59.432204 1870087 round_trippers.go:580]     Audit-Id: cf054361-00ec-43db-bb04-7279b6fc9c1a
	I0718 00:01:59.432211 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:59.432217 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:59.432225 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:59.432232 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:59.432330 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:59.929676 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:01:59.929699 1870087 round_trippers.go:469] Request Headers:
	I0718 00:01:59.929710 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:01:59.929720 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:01:59.932436 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:01:59.932466 1870087 round_trippers.go:577] Response Headers:
	I0718 00:01:59.932476 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:01:59.932483 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:01:59 GMT
	I0718 00:01:59.932491 1870087 round_trippers.go:580]     Audit-Id: 9b3f0ed9-e3bf-4bb0-ab8b-6c16bbe2199f
	I0718 00:01:59.932498 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:01:59.932505 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:01:59.932515 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:01:59.932634 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:01:59.933020 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
	I0718 00:02:00.429264 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:00.429286 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:00.429296 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:00.429304 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:00.432509 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:02:00.432540 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:00.432550 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:00.432558 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:00.432565 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:00.432572 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:00.432584 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:00 GMT
	I0718 00:02:00.432591 1870087 round_trippers.go:580]     Audit-Id: ee366e10-9bbd-43ec-9ffb-468dc2c46dd9
	I0718 00:02:00.432687 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:00.929808 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:00.929829 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:00.929839 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:00.929847 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:00.932563 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:00.932588 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:00.932599 1870087 round_trippers.go:580]     Audit-Id: 925ac567-9565-404a-9daa-9b184fab5f17
	I0718 00:02:00.932612 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:00.932619 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:00.932626 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:00.932632 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:00.932640 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:00 GMT
	I0718 00:02:00.932755 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:01.430010 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:01.430034 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:01.430044 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:01.430052 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:01.432830 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:01.432859 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:01.432867 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:01.432875 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:01.432882 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:01 GMT
	I0718 00:02:01.432916 1870087 round_trippers.go:580]     Audit-Id: f3292ee4-c959-48a4-9570-14cf76039255
	I0718 00:02:01.432930 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:01.432939 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:01.433039 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:01.929556 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:01.929581 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:01.929598 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:01.929606 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:01.932427 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:01.932475 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:01.932485 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:01.932492 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:01.932500 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:01 GMT
	I0718 00:02:01.932507 1870087 round_trippers.go:580]     Audit-Id: 5f1619d8-8d6f-4b45-8076-eb0ffc2409d4
	I0718 00:02:01.932516 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:01.932523 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:01.932923 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:01.933303 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
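	(Editorial note: the repeating round_trippers/node_ready lines above are minikube's readiness poll: roughly every 500ms it issues GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02 and logs the node's Ready condition until it becomes True or the wait times out. For reference, a minimal client-go sketch of such a loop follows. This is an illustration under assumptions, not minikube's actual node_ready.go: the WaitNodeReady name is hypothetical, and the 500ms interval is inferred from the log timestamps rather than taken from minikube's source.)

	// nodewait is a minimal, self-contained sketch of the readiness poll
	// whose output appears in the log above. It is NOT minikube's actual
	// node_ready.go; names, interval, and timeout are assumptions.
	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady repeatedly issues GET /api/v1/nodes/<name> -- the same
	// request the round_trippers lines record -- until the node's Ready
	// condition becomes True or the timeout expires.
	func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		// Poll every 500ms, matching the ~half-second spacing of the log
		// timestamps (an inference, not a documented constant).
		return wait.PollImmediateWithContext(ctx, 500*time.Millisecond, timeout, func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					// Corresponds to: node_ready.go:58] node "..." has status "Ready":"False"
					fmt.Printf("node %q has status %q:%q\n", name, cond.Type, cond.Status)
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	(Polling a plain GET rather than opening a watch keeps such a helper simple at the cost of one extra API round-trip per interval, which is why the log repeats the same 200 OK response and node object every half second until the condition changes.)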
	I0718 00:02:02.429274 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:02.429299 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:02.429309 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:02.429317 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:02.431959 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:02.431980 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:02.431989 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:02 GMT
	I0718 00:02:02.431996 1870087 round_trippers.go:580]     Audit-Id: 56530bc4-b7cb-4434-b30f-7e14bc7fa483
	I0718 00:02:02.432003 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:02.432010 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:02.432016 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:02.432023 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:02.432123 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:02.929620 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:02.929644 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:02.929657 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:02.929665 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:02.932237 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:02.932262 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:02.932271 1870087 round_trippers.go:580]     Audit-Id: 4d1048e5-138f-483c-8c18-8f35e5a634a3
	I0718 00:02:02.932279 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:02.932285 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:02.932292 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:02.932299 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:02.932306 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:02 GMT
	I0718 00:02:02.932540 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:03.429204 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:03.429226 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:03.429237 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:03.429244 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:03.431930 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:03.431952 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:03.431963 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:03 GMT
	I0718 00:02:03.431970 1870087 round_trippers.go:580]     Audit-Id: d6ce6ece-385b-480e-8662-35e67fa36eb9
	I0718 00:02:03.431977 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:03.431983 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:03.431990 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:03.431996 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:03.432156 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:03.929840 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:03.929864 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:03.929874 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:03.929882 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:03.932391 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:03.932416 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:03.932424 1870087 round_trippers.go:580]     Audit-Id: 5d9f030a-b89a-48ed-a8f4-4aa1f75ab6bf
	I0718 00:02:03.932431 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:03.932437 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:03.932444 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:03.932451 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:03.932461 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:03 GMT
	I0718 00:02:03.932653 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:04.429798 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:04.429820 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:04.429831 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:04.429838 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:04.432532 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:04.432556 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:04.432565 1870087 round_trippers.go:580]     Audit-Id: 905f107d-9b1e-444b-ae35-a5141bd45ba0
	I0718 00:02:04.432572 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:04.432578 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:04.432585 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:04.432592 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:04.432599 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:04 GMT
	I0718 00:02:04.432712 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:04.433088 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
	I0718 00:02:04.929823 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:04.929847 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:04.929857 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:04.929865 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:04.932521 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:04.932548 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:04.932557 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:04.932564 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:04.932570 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:04.932578 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:04 GMT
	I0718 00:02:04.932584 1870087 round_trippers.go:580]     Audit-Id: c48c299f-101e-42c4-ab8b-724ca6531a92
	I0718 00:02:04.932591 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:04.932695 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:05.429893 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:05.429915 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:05.429927 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:05.429935 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:05.432592 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:05.432617 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:05.432626 1870087 round_trippers.go:580]     Audit-Id: f28449b4-0ed5-45c1-957a-2c8171d4c9bd
	I0718 00:02:05.432634 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:05.432641 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:05.432647 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:05.432654 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:05.432661 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:05 GMT
	I0718 00:02:05.432743 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:05.930004 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:05.930030 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:05.930040 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:05.930047 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:05.932589 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:05.932619 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:05.932628 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:05.932637 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:05.932645 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:05.932653 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:05.932668 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:05 GMT
	I0718 00:02:05.932678 1870087 round_trippers.go:580]     Audit-Id: 85a0b7a4-6cce-4386-97b1-f0fbdaeac8f7
	I0718 00:02:05.932967 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:06.430227 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:06.430257 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:06.430268 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:06.430285 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:06.432814 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:06.432836 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:06.432845 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:06.432852 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:06.432859 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:06.432866 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:06.432873 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:06 GMT
	I0718 00:02:06.432880 1870087 round_trippers.go:580]     Audit-Id: 0bc5e459-bf3f-49c9-b7ce-e60471e6e2e3
	I0718 00:02:06.433034 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:06.433433 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
	I0718 00:02:06.929628 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:06.929650 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:06.929660 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:06.929668 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:06.932523 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:06.932544 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:06.932553 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:06.932560 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:06 GMT
	I0718 00:02:06.932567 1870087 round_trippers.go:580]     Audit-Id: d84e1631-e595-41c1-b1ee-654b01787246
	I0718 00:02:06.932574 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:06.932580 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:06.932587 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:06.932718 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:07.429813 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:07.429836 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:07.429846 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:07.429854 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:07.432473 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:07.432499 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:07.432509 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:07.432516 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:07.432524 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:07 GMT
	I0718 00:02:07.432534 1870087 round_trippers.go:580]     Audit-Id: 2b8b9c7b-8ab9-495e-9b60-807537c63383
	I0718 00:02:07.432545 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:07.432552 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:07.432679 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:07.929435 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:07.929458 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:07.929469 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:07.929477 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:07.932081 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:07.932110 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:07.932119 1870087 round_trippers.go:580]     Audit-Id: 6c662bb4-435d-43d7-8376-17e6e1841d76
	I0718 00:02:07.932126 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:07.932138 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:07.932151 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:07.932163 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:07.932171 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:07 GMT
	I0718 00:02:07.932352 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:08.429438 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:08.429459 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:08.429470 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:08.429478 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:08.431939 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:08.431966 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:08.431976 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:08.431984 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:08.431994 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:08 GMT
	I0718 00:02:08.432001 1870087 round_trippers.go:580]     Audit-Id: 5c706c0f-3fa2-4263-824a-0829d07eb468
	I0718 00:02:08.432007 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:08.432014 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:08.432311 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:08.929755 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:08.929780 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:08.929790 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:08.929798 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:08.932431 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:08.932455 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:08.932466 1870087 round_trippers.go:580]     Audit-Id: c964bc6f-f94c-4237-813c-8e4e4f2c87a0
	I0718 00:02:08.932473 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:08.932486 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:08.932494 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:08.932501 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:08.932508 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:08 GMT
	I0718 00:02:08.932809 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"520","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5561 chars]
	I0718 00:02:08.933202 1870087 node_ready.go:58] node "multinode-451668-m02" has status "Ready":"False"
	I0718 00:02:09.429404 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:09.429428 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.429438 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.429446 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.432000 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:09.432021 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.432030 1870087 round_trippers.go:580]     Audit-Id: 2764bbba-89ee-4760-927a-e3b3fa042d83
	I0718 00:02:09.432037 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.432044 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.432051 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.432058 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.432064 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.432167 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"542","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I0718 00:02:09.432530 1870087 node_ready.go:49] node "multinode-451668-m02" has status "Ready":"True"
	I0718 00:02:09.432541 1870087 node_ready.go:38] duration metric: took 30.511764139s waiting for node "multinode-451668-m02" to be "Ready" ...
	I0718 00:02:09.432550 1870087 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 00:02:09.432649 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0718 00:02:09.432654 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.432662 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.432669 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.436252 1870087 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 00:02:09.436276 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.436286 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.436293 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.436300 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.436307 1870087 round_trippers.go:580]     Audit-Id: 25036d7d-9b39-41bb-8166-b2ae3c48ecbe
	I0718 00:02:09.436313 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.436320 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.437347 1870087 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"542"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qvgbw","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9d2a4d36-002a-4117-b0ec-2c58b2b7249b","resourceVersion":"455","creationTimestamp":"2023-07-18T00:00:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0718 00:02:09.440323 1870087 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-qvgbw" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:09.440412 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qvgbw
	I0718 00:02:09.440425 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.440435 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.440445 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.445169 1870087 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0718 00:02:09.445192 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.445201 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.445208 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.445216 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.445223 1870087 round_trippers.go:580]     Audit-Id: 9bde4ed4-d6d9-4c26-af7a-640b63eda7df
	I0718 00:02:09.445234 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.445242 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.445341 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qvgbw","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9d2a4d36-002a-4117-b0ec-2c58b2b7249b","resourceVersion":"455","creationTimestamp":"2023-07-18T00:00:49Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2cabbe8e-0e5c-43eb-80c2-cd9f231da99d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0718 00:02:09.445873 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:02:09.445889 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.445898 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.445905 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.448377 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:09.448404 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.448413 1870087 round_trippers.go:580]     Audit-Id: 7084b7c5-e9b5-40e5-b176-235f6fc434e7
	I0718 00:02:09.448420 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.448427 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.448445 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.448452 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.448462 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.448781 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:02:09.449180 1870087 pod_ready.go:92] pod "coredns-5d78c9869d-qvgbw" in "kube-system" namespace has status "Ready":"True"
	I0718 00:02:09.449198 1870087 pod_ready.go:81] duration metric: took 8.845942ms waiting for pod "coredns-5d78c9869d-qvgbw" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:09.449209 1870087 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:09.449275 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-451668
	I0718 00:02:09.449285 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.449293 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.449300 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.451894 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:09.451918 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.451927 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.451933 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.451940 1870087 round_trippers.go:580]     Audit-Id: 5c467035-ec48-444e-a49d-1fe0d3123ff2
	I0718 00:02:09.451946 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.451953 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.451964 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.452215 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-451668","namespace":"kube-system","uid":"ff35a53d-a680-4948-89ae-4b41390d5766","resourceVersion":"429","creationTimestamp":"2023-07-18T00:00:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"7dfc83176ac111a8be324df9a81beceb","kubernetes.io/config.mirror":"7dfc83176ac111a8be324df9a81beceb","kubernetes.io/config.seen":"2023-07-18T00:00:37.124825456Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0718 00:02:09.452723 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:02:09.452738 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.452747 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.452758 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.455166 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:09.455187 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.455196 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.455202 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.455209 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.455215 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.455223 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.455229 1870087 round_trippers.go:580]     Audit-Id: 616132bc-134d-4ed0-b8e2-a70b27b56fcb
	I0718 00:02:09.455379 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:02:09.455803 1870087 pod_ready.go:92] pod "etcd-multinode-451668" in "kube-system" namespace has status "Ready":"True"
	I0718 00:02:09.455820 1870087 pod_ready.go:81] duration metric: took 6.599631ms waiting for pod "etcd-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:09.455839 1870087 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:09.455901 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-451668
	I0718 00:02:09.455911 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.455920 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.455927 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.458372 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:09.458480 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.458494 1870087 round_trippers.go:580]     Audit-Id: d74de0ae-564b-4c29-835a-d4dbb77067f6
	I0718 00:02:09.458501 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.458508 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.458515 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.458526 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.458533 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.458977 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-451668","namespace":"kube-system","uid":"67421618-9334-4da3-b70c-4df5028a3e13","resourceVersion":"426","creationTimestamp":"2023-07-18T00:00:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b21d9f1735bd99f21ac6a561db59b8b7","kubernetes.io/config.mirror":"b21d9f1735bd99f21ac6a561db59b8b7","kubernetes.io/config.seen":"2023-07-18T00:00:37.124827679Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0718 00:02:09.459554 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:02:09.459572 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.459582 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.459590 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.462042 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:09.462065 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.462075 1870087 round_trippers.go:580]     Audit-Id: 541ee97d-9cba-4d46-b920-a0c748a4fc53
	I0718 00:02:09.462082 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.462088 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.462102 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.462109 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.462115 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.462481 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:02:09.462909 1870087 pod_ready.go:92] pod "kube-apiserver-multinode-451668" in "kube-system" namespace has status "Ready":"True"
	I0718 00:02:09.462929 1870087 pod_ready.go:81] duration metric: took 7.078674ms waiting for pod "kube-apiserver-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:09.462940 1870087 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:09.463005 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-451668
	I0718 00:02:09.463015 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.463024 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.463031 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.465754 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:09.465780 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.465790 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.465798 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.465810 1870087 round_trippers.go:580]     Audit-Id: 23b92397-bbf3-4d2f-911f-c5461dcc3006
	I0718 00:02:09.465817 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.465830 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.465842 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.465961 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-451668","namespace":"kube-system","uid":"873eb02d-decd-42d6-a94b-e93f4248f3b8","resourceVersion":"427","creationTimestamp":"2023-07-18T00:00:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e03e0ec870f5198b407028f8bd83bcde","kubernetes.io/config.mirror":"e03e0ec870f5198b407028f8bd83bcde","kubernetes.io/config.seen":"2023-07-18T00:00:37.124818055Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0718 00:02:09.466522 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:02:09.466538 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.466547 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.466554 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.468972 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:09.468997 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.469006 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.469013 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.469028 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.469036 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.469047 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.469054 1870087 round_trippers.go:580]     Audit-Id: 135b8bd1-d4bb-4dc3-ad12-7405f83adc3c
	I0718 00:02:09.469338 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:02:09.469728 1870087 pod_ready.go:92] pod "kube-controller-manager-multinode-451668" in "kube-system" namespace has status "Ready":"True"
	I0718 00:02:09.469743 1870087 pod_ready.go:81] duration metric: took 6.793089ms waiting for pod "kube-controller-manager-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:09.469757 1870087 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7knpj" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:09.630304 1870087 request.go:628] Waited for 160.464499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7knpj
	I0718 00:02:09.630377 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7knpj
	I0718 00:02:09.630387 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.630399 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.630431 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.633139 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:09.633170 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.633180 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.633187 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.633194 1870087 round_trippers.go:580]     Audit-Id: 3f6d2943-10dc-47a3-aef9-e527aeef1236
	I0718 00:02:09.633200 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.633207 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.633218 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.633348 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7knpj","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6cebdce-80d9-4b8b-8ea5-415bb18d1f07","resourceVersion":"420","creationTimestamp":"2023-07-18T00:00:49Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6f75b6d3-9814-4f6b-8118-2be5ffd5c4e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f75b6d3-9814-4f6b-8118-2be5ffd5c4e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0718 00:02:09.830151 1870087 request.go:628] Waited for 196.318473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:02:09.830217 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:02:09.830222 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:09.830232 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:09.830243 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:09.832624 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:09.832650 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:09.832658 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:09.832665 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:09.832674 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:09 GMT
	I0718 00:02:09.832681 1870087 round_trippers.go:580]     Audit-Id: fcb9699e-21e9-4c8e-84de-95a2f3570c2f
	I0718 00:02:09.832688 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:09.832697 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:09.832813 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:02:09.833204 1870087 pod_ready.go:92] pod "kube-proxy-7knpj" in "kube-system" namespace has status "Ready":"True"
	I0718 00:02:09.833221 1870087 pod_ready.go:81] duration metric: took 363.45186ms waiting for pod "kube-proxy-7knpj" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:09.833232 1870087 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wm797" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:10.029534 1870087 request.go:628] Waited for 196.229292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wm797
	I0718 00:02:10.029660 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wm797
	I0718 00:02:10.029673 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:10.029684 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:10.029692 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:10.032587 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:10.032660 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:10.032686 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:10.032712 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:10.032734 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:10.032743 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:10 GMT
	I0718 00:02:10.032750 1870087 round_trippers.go:580]     Audit-Id: b2cae5ac-d0a5-426d-930c-d2012639e057
	I0718 00:02:10.032757 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:10.032886 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wm797","generateName":"kube-proxy-","namespace":"kube-system","uid":"582ad1c5-8f1d-4f45-be3c-32afeab4c4ca","resourceVersion":"509","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6f75b6d3-9814-4f6b-8118-2be5ffd5c4e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f75b6d3-9814-4f6b-8118-2be5ffd5c4e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0718 00:02:10.229806 1870087 request.go:628] Waited for 196.368319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:10.229905 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668-m02
	I0718 00:02:10.229934 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:10.229950 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:10.229958 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:10.232528 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:10.232558 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:10.232567 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:10.232575 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:10 GMT
	I0718 00:02:10.232581 1870087 round_trippers.go:580]     Audit-Id: 71d5b794-5a2e-4269-b035-c174f4a25a58
	I0718 00:02:10.232588 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:10.232606 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:10.232617 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:10.233036 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668-m02","uid":"a5189875-d185-4339-9499-a5adf8a31338","resourceVersion":"542","creationTimestamp":"2023-07-18T00:01:37Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I0718 00:02:10.233429 1870087 pod_ready.go:92] pod "kube-proxy-wm797" in "kube-system" namespace has status "Ready":"True"
	I0718 00:02:10.233449 1870087 pod_ready.go:81] duration metric: took 400.202367ms waiting for pod "kube-proxy-wm797" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:10.233461 1870087 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:10.429888 1870087 request.go:628] Waited for 196.360728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-451668
	I0718 00:02:10.429968 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-451668
	I0718 00:02:10.429998 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:10.430013 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:10.430027 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:10.432640 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:10.432673 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:10.432697 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:10.432713 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:10.432721 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:10.432732 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:10.432754 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:10 GMT
	I0718 00:02:10.432772 1870087 round_trippers.go:580]     Audit-Id: c5eeebe5-cc48-40ed-aa94-d2a379360629
	I0718 00:02:10.432914 1870087 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-451668","namespace":"kube-system","uid":"be313f6d-3c25-4ace-a780-aa89145c91c2","resourceVersion":"428","creationTimestamp":"2023-07-18T00:00:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9065ff40d6e81cfa36e7ba470cd8a37f","kubernetes.io/config.mirror":"9065ff40d6e81cfa36e7ba470cd8a37f","kubernetes.io/config.seen":"2023-07-18T00:00:37.124823954Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-18T00:00:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0718 00:02:10.629724 1870087 request.go:628] Waited for 196.339223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:02:10.629798 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-451668
	I0718 00:02:10.629809 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:10.629819 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:10.629826 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:10.632494 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:10.632520 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:10.632530 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:10 GMT
	I0718 00:02:10.632537 1870087 round_trippers.go:580]     Audit-Id: d5fc0118-856c-44ea-9f01-360e2dbb3ac5
	I0718 00:02:10.632588 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:10.632604 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:10.632612 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:10.632618 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:10.632784 1870087 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-18T00:00:33Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0718 00:02:10.633205 1870087 pod_ready.go:92] pod "kube-scheduler-multinode-451668" in "kube-system" namespace has status "Ready":"True"
	I0718 00:02:10.633223 1870087 pod_ready.go:81] duration metric: took 399.751483ms waiting for pod "kube-scheduler-multinode-451668" in "kube-system" namespace to be "Ready" ...
	I0718 00:02:10.633236 1870087 pod_ready.go:38] duration metric: took 1.200675548s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 00:02:10.633251 1870087 system_svc.go:44] waiting for kubelet service to be running ....
	I0718 00:02:10.633316 1870087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 00:02:10.646913 1870087 system_svc.go:56] duration metric: took 13.652531ms WaitForService to wait for kubelet.
	I0718 00:02:10.646938 1870087 kubeadm.go:581] duration metric: took 31.747801424s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0718 00:02:10.646959 1870087 node_conditions.go:102] verifying NodePressure condition ...
	I0718 00:02:10.830379 1870087 request.go:628] Waited for 183.328285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0718 00:02:10.830467 1870087 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0718 00:02:10.830477 1870087 round_trippers.go:469] Request Headers:
	I0718 00:02:10.830487 1870087 round_trippers.go:473]     Accept: application/json, */*
	I0718 00:02:10.830515 1870087 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0718 00:02:10.833228 1870087 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 00:02:10.833259 1870087 round_trippers.go:577] Response Headers:
	I0718 00:02:10.833269 1870087 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 00:02:10.833292 1870087 round_trippers.go:580]     Content-Type: application/json
	I0718 00:02:10.833315 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5d1ee4da-948f-478b-9421-b748701691b0
	I0718 00:02:10.833337 1870087 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c735fc0f-63a0-4ac3-9c60-bda3fe4c55c5
	I0718 00:02:10.833348 1870087 round_trippers.go:580]     Date: Tue, 18 Jul 2023 00:02:10 GMT
	I0718 00:02:10.833378 1870087 round_trippers.go:580]     Audit-Id: acb9e80e-d53a-4e1a-ab90-c2f272a279f9
	I0718 00:02:10.833584 1870087 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"544"},"items":[{"metadata":{"name":"multinode-451668","uid":"0a3e339a-edff-4f2a-aa55-7a3d620bc6f9","resourceVersion":"439","creationTimestamp":"2023-07-18T00:00:33Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-451668","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-451668","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_18T00_00_38_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I0718 00:02:10.834353 1870087 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0718 00:02:10.834377 1870087 node_conditions.go:123] node cpu capacity is 2
	I0718 00:02:10.834396 1870087 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0718 00:02:10.834404 1870087 node_conditions.go:123] node cpu capacity is 2
	I0718 00:02:10.834433 1870087 node_conditions.go:105] duration metric: took 187.468174ms to run NodePressure ...
	I0718 00:02:10.834453 1870087 start.go:228] waiting for startup goroutines ...
	I0718 00:02:10.834483 1870087 start.go:242] writing updated cluster config ...
	I0718 00:02:10.834879 1870087 ssh_runner.go:195] Run: rm -f paused
	I0718 00:02:10.899360 1870087 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0718 00:02:10.902782 1870087 out.go:177] * Done! kubectl is now configured to use "multinode-451668" cluster and "default" namespace by default
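	
	The pod_ready.go loop above is a plain client-go poll: fetch the pod, scan status.conditions for Ready=True, sleep, and retry until the 6m0s budget runs out; the recurring ~196ms "Waited for ... due to client-side throttling" lines are the client's own rate limiter, not apiserver priority-and-fairness. A minimal sketch of the same check, assuming client-go; the kubeconfig path, namespace, and pod name are illustrative stand-ins, not minikube's code:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Illustrative kubeconfig path; the test harness uses its own test home.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-7knpj", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					// The "Ready":"True" check logged by pod_ready.go reads this condition.
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(200 * time.Millisecond)
		}
		panic("timed out waiting for pod to be Ready")
	}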
	
	* 
	* ==> CRI-O <==
	* Jul 18 00:01:22 multinode-451668 crio[902]: time="2023-07-18 00:01:22.043169851Z" level=info msg="Starting container: be726d5ce161bcee4f0281c3b8c1ecb010085cca9c311ea68869394d65352c09" id=4b4dce1a-6d5b-4881-89e6-472910832daa name=/runtime.v1.RuntimeService/StartContainer
	Jul 18 00:01:22 multinode-451668 crio[902]: time="2023-07-18 00:01:22.058552605Z" level=info msg="Created container aac14a841a48ccd1b77979263b532b48bc5e5d248756a9100083fec3161fc1cb: kube-system/storage-provisioner/storage-provisioner" id=c0badf34-67f3-4827-a7ea-04f487fcdd2b name=/runtime.v1.RuntimeService/CreateContainer
	Jul 18 00:01:22 multinode-451668 crio[902]: time="2023-07-18 00:01:22.059479956Z" level=info msg="Starting container: aac14a841a48ccd1b77979263b532b48bc5e5d248756a9100083fec3161fc1cb" id=548d3de1-7fe5-4590-bff5-22de9d4fca8e name=/runtime.v1.RuntimeService/StartContainer
	Jul 18 00:01:22 multinode-451668 crio[902]: time="2023-07-18 00:01:22.065761501Z" level=info msg="Started container" PID=1928 containerID=be726d5ce161bcee4f0281c3b8c1ecb010085cca9c311ea68869394d65352c09 description=kube-system/coredns-5d78c9869d-qvgbw/coredns id=4b4dce1a-6d5b-4881-89e6-472910832daa name=/runtime.v1.RuntimeService/StartContainer sandboxID=b009ea7ae50915fdeba5ed3af462a9c6e5d0bff0b0638ad099606d3d6f0a2f9b
	Jul 18 00:01:22 multinode-451668 crio[902]: time="2023-07-18 00:01:22.082466818Z" level=info msg="Started container" PID=1935 containerID=aac14a841a48ccd1b77979263b532b48bc5e5d248756a9100083fec3161fc1cb description=kube-system/storage-provisioner/storage-provisioner id=548d3de1-7fe5-4590-bff5-22de9d4fca8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=5da8b939344b1b2a3087aad6eae9a14b8b3d9f716e0deb66708ddb5306b7da2f
	Jul 18 00:02:12 multinode-451668 crio[902]: time="2023-07-18 00:02:12.122943719Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-d4jjr/POD" id=4774cff7-19de-477f-9735-dd55c2214fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 18 00:02:12 multinode-451668 crio[902]: time="2023-07-18 00:02:12.123013717Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 18 00:02:12 multinode-451668 crio[902]: time="2023-07-18 00:02:12.144362934Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-d4jjr Namespace:default ID:08749841a4efe21132b5f1bfd90007bfd9083820dcd56c989bc5634d92077ea9 UID:1312769c-96cf-47d4-8989-7e0e7d0a1e1a NetNS:/var/run/netns/ac3b22b6-6dc0-4858-844e-8f9b4e36eafd Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 18 00:02:12 multinode-451668 crio[902]: time="2023-07-18 00:02:12.144411270Z" level=info msg="Adding pod default_busybox-67b7f59bb-d4jjr to CNI network \"kindnet\" (type=ptp)"
	Jul 18 00:02:12 multinode-451668 crio[902]: time="2023-07-18 00:02:12.172200300Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-d4jjr Namespace:default ID:08749841a4efe21132b5f1bfd90007bfd9083820dcd56c989bc5634d92077ea9 UID:1312769c-96cf-47d4-8989-7e0e7d0a1e1a NetNS:/var/run/netns/ac3b22b6-6dc0-4858-844e-8f9b4e36eafd Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 18 00:02:12 multinode-451668 crio[902]: time="2023-07-18 00:02:12.172358657Z" level=info msg="Checking pod default_busybox-67b7f59bb-d4jjr for CNI network kindnet (type=ptp)"
	Jul 18 00:02:12 multinode-451668 crio[902]: time="2023-07-18 00:02:12.181720253Z" level=info msg="Ran pod sandbox 08749841a4efe21132b5f1bfd90007bfd9083820dcd56c989bc5634d92077ea9 with infra container: default/busybox-67b7f59bb-d4jjr/POD" id=4774cff7-19de-477f-9735-dd55c2214fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 18 00:02:12 multinode-451668 crio[902]: time="2023-07-18 00:02:12.182998255Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=c83f9954-8db6-42af-ac35-bd863b2574e0 name=/runtime.v1.ImageService/ImageStatus
	Jul 18 00:02:12 multinode-451668 crio[902]: time="2023-07-18 00:02:12.183235652Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=c83f9954-8db6-42af-ac35-bd863b2574e0 name=/runtime.v1.ImageService/ImageStatus
	Jul 18 00:02:12 multinode-451668 crio[902]: time="2023-07-18 00:02:12.186387924Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=f3bfc6db-2869-48d3-803f-0ba3b5f6adc6 name=/runtime.v1.ImageService/PullImage
	Jul 18 00:02:12 multinode-451668 crio[902]: time="2023-07-18 00:02:12.188442024Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 18 00:02:13 multinode-451668 crio[902]: time="2023-07-18 00:02:13.263459198Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 18 00:02:14 multinode-451668 crio[902]: time="2023-07-18 00:02:14.660190345Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=f3bfc6db-2869-48d3-803f-0ba3b5f6adc6 name=/runtime.v1.ImageService/PullImage
	Jul 18 00:02:14 multinode-451668 crio[902]: time="2023-07-18 00:02:14.661269735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=28a5316a-3d43-407c-91de-a3eb0aa04358 name=/runtime.v1.ImageService/ImageStatus
	Jul 18 00:02:14 multinode-451668 crio[902]: time="2023-07-18 00:02:14.661892916Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=28a5316a-3d43-407c-91de-a3eb0aa04358 name=/runtime.v1.ImageService/ImageStatus
	Jul 18 00:02:14 multinode-451668 crio[902]: time="2023-07-18 00:02:14.662804792Z" level=info msg="Creating container: default/busybox-67b7f59bb-d4jjr/busybox" id=1535236b-f508-49de-9890-48d87dba76ca name=/runtime.v1.RuntimeService/CreateContainer
	Jul 18 00:02:14 multinode-451668 crio[902]: time="2023-07-18 00:02:14.662901316Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 18 00:02:14 multinode-451668 crio[902]: time="2023-07-18 00:02:14.741552712Z" level=info msg="Created container 7ec9f63346da1d8216f90ab14322c2e98414255c13d823cba875aeef1dc16da5: default/busybox-67b7f59bb-d4jjr/busybox" id=1535236b-f508-49de-9890-48d87dba76ca name=/runtime.v1.RuntimeService/CreateContainer
	Jul 18 00:02:14 multinode-451668 crio[902]: time="2023-07-18 00:02:14.742270875Z" level=info msg="Starting container: 7ec9f63346da1d8216f90ab14322c2e98414255c13d823cba875aeef1dc16da5" id=1c5a6ab6-bb79-465a-b081-b7ba8d4f1371 name=/runtime.v1.RuntimeService/StartContainer
	Jul 18 00:02:14 multinode-451668 crio[902]: time="2023-07-18 00:02:14.753839185Z" level=info msg="Started container" PID=2079 containerID=7ec9f63346da1d8216f90ab14322c2e98414255c13d823cba875aeef1dc16da5 description=default/busybox-67b7f59bb-d4jjr/busybox id=1c5a6ab6-bb79-465a-b081-b7ba8d4f1371 name=/runtime.v1.RuntimeService/StartContainer sandboxID=08749841a4efe21132b5f1bfd90007bfd9083820dcd56c989bc5634d92077ea9
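	
	The RunPodSandbox -> pull -> CreateContainer -> StartContainer sequence above is the kubelet driving CRI-O over the CRI gRPC API on the socket named in the node annotations. A hedged sketch of talking to that same socket with the cri-api client (listing containers rather than creating them); treat it as illustrative, not kubelet code:
	
	package main
	
	import (
		"context"
		"fmt"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Socket path taken from the kubeadm.alpha.kubernetes.io/cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Prints rows comparable to the "container status" table below.
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}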
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7ec9f63346da1       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   08749841a4efe       busybox-67b7f59bb-d4jjr
	be726d5ce161b       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      58 seconds ago       Running             coredns                   0                   b009ea7ae5091       coredns-5d78c9869d-qvgbw
	aac14a841a48c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      58 seconds ago       Running             storage-provisioner       0                   5da8b939344b1       storage-provisioner
	813d95b28c643       fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a                                      About a minute ago   Running             kube-proxy                0                   b1d0f012bfda4       kube-proxy-7knpj
	b3b272e7e8b7e       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                      About a minute ago   Running             kindnet-cni               0                   4eb1624fdeb0f       kindnet-jcxjg
	3939f14e5a388       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                      About a minute ago   Running             etcd                      0                   6630371ebe988       etcd-multinode-451668
	ece5258048d08       bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540                                      About a minute ago   Running             kube-scheduler            0                   b7fdb4d8a2f37       kube-scheduler-multinode-451668
	f38376f93d96a       39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473                                      About a minute ago   Running             kube-apiserver            0                   7e459105a947e       kube-apiserver-multinode-451668
	7d62212a6dd62       ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8                                      About a minute ago   Running             kube-controller-manager   0                   0477547b5659e       kube-controller-manager-multinode-451668
	
	* 
	* ==> coredns [be726d5ce161bcee4f0281c3b8c1ecb010085cca9c311ea68869394d65352c09] <==
	* [INFO] 10.244.0.3:45464 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083347s
	[INFO] 10.244.1.2:60613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141332s
	[INFO] 10.244.1.2:57444 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001332819s
	[INFO] 10.244.1.2:35693 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000089468s
	[INFO] 10.244.1.2:52825 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075946s
	[INFO] 10.244.1.2:40339 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001311732s
	[INFO] 10.244.1.2:52149 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105779s
	[INFO] 10.244.1.2:37538 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079154s
	[INFO] 10.244.1.2:49218 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080237s
	[INFO] 10.244.0.3:34070 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114444s
	[INFO] 10.244.0.3:35514 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007648s
	[INFO] 10.244.0.3:58602 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098149s
	[INFO] 10.244.0.3:45284 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052988s
	[INFO] 10.244.1.2:49524 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154673s
	[INFO] 10.244.1.2:34101 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086251s
	[INFO] 10.244.1.2:39501 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078013s
	[INFO] 10.244.1.2:36728 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089558s
	[INFO] 10.244.0.3:57029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102653s
	[INFO] 10.244.0.3:44460 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131978s
	[INFO] 10.244.0.3:48016 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120885s
	[INFO] 10.244.0.3:43660 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089714s
	[INFO] 10.244.1.2:36993 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104023s
	[INFO] 10.244.1.2:46263 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073616s
	[INFO] 10.244.1.2:37036 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064491s
	[INFO] 10.244.1.2:36991 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074248s
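	
	The NXDOMAIN/NOERROR pairs above are the pod resolver's search-path expansion at work: with the cluster's ndots:5 resolv.conf, a short name such as "kubernetes.default" is first tried against each search domain (hence the NXDOMAIN for kubernetes.default.default.svc.cluster.local) before the qualified kubernetes.default.svc.cluster.local answers. A tiny reproduction, assuming it runs inside a cluster pod:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		for _, host := range []string{
			"kubernetes.default",                   // expanded via resolv.conf search domains
			"kubernetes.default.svc.cluster.local", // fully qualified, answered directly
		} {
			addrs, err := net.LookupHost(host)
			fmt.Printf("%-40s -> %v err=%v\n", host, addrs, err)
		}
	}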
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-451668
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-451668
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=multinode-451668
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_18T00_00_38_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 18 Jul 2023 00:00:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-451668
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 18 Jul 2023 00:02:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 18 Jul 2023 00:01:21 +0000   Tue, 18 Jul 2023 00:00:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 18 Jul 2023 00:01:21 +0000   Tue, 18 Jul 2023 00:00:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 18 Jul 2023 00:01:21 +0000   Tue, 18 Jul 2023 00:00:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 18 Jul 2023 00:01:21 +0000   Tue, 18 Jul 2023 00:01:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-451668
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8519520c4534c23b4c332f94b7adb5c
	  System UUID:                4713cf57-4972-4c60-9442-6976a00e656f
	  Boot ID:                    233fb95c-536d-4fc4-882b-c04fac35e1a2
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-d4jjr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5d78c9869d-qvgbw                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     91s
	  kube-system                 etcd-multinode-451668                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         103s
	  kube-system                 kindnet-jcxjg                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-multinode-451668             250m (12%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-multinode-451668    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-7knpj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-multinode-451668             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node multinode-451668 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet          Node multinode-451668 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x8 over 111s)  kubelet          Node multinode-451668 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s                 kubelet          Node multinode-451668 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s                 kubelet          Node multinode-451668 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s                 kubelet          Node multinode-451668 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           92s                  node-controller  Node multinode-451668 event: Registered Node multinode-451668 in Controller
	  Normal  NodeReady                59s                  kubelet          Node multinode-451668 status is now: NodeReady
	
	
	Name:               multinode-451668-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-451668-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 18 Jul 2023 00:01:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-451668-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 18 Jul 2023 00:02:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 18 Jul 2023 00:02:09 +0000   Tue, 18 Jul 2023 00:01:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 18 Jul 2023 00:02:09 +0000   Tue, 18 Jul 2023 00:01:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 18 Jul 2023 00:02:09 +0000   Tue, 18 Jul 2023 00:01:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 18 Jul 2023 00:02:09 +0000   Tue, 18 Jul 2023 00:02:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-451668-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bbde0c707dd44c2a3c220719b1176ac
	  System UUID:                10d6c13b-0c84-4a5c-a5af-282770b3d2e6
	  Boot ID:                    233fb95c-536d-4fc4-882b-c04fac35e1a2
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-qfp74    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-whgc7              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      43s
	  kube-system                 kube-proxy-wm797           0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  43s (x5 over 44s)  kubelet          Node multinode-451668-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x5 over 44s)  kubelet          Node multinode-451668-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x5 over 44s)  kubelet          Node multinode-451668-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node multinode-451668-m02 event: Registered Node multinode-451668-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-451668-m02 status is now: NodeReady
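	
	The Conditions and Capacity blocks above are exactly what the earlier node_conditions.go NodePressure check read (ephemeral storage 203034800Ki, cpu 2, all pressure conditions False). A hedged client-go sketch of the same read; the kubeconfig path is an illustrative stand-in:
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity fields match the "node storage ephemeral capacity" lines above.
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
				n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				fmt.Printf("  %-16s %s (%s)\n", c.Type, c.Status, c.Reason)
			}
		}
	}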
	
	* 
	* ==> dmesg <==
	* [  +0.001025] FS-Cache: O-key=[8] '8b663b0000000000'
	[  +0.000680] FS-Cache: N-cookie c=00000078 [p=0000006f fl=2 nc=0 na=1]
	[  +0.000977] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=000000000a000c51
	[  +0.001041] FS-Cache: N-key=[8] '8b663b0000000000'
	[  +0.002357] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=00000072 [p=0000006f fl=226 nc=0 na=1]
	[  +0.000970] FS-Cache: O-cookie d=00000000620abd40{9p.inode} n=000000005904d9c7
	[  +0.001053] FS-Cache: O-key=[8] '8b663b0000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000079 [p=0000006f fl=2 nc=0 na=1]
	[  +0.001085] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=00000000c127b604
	[  +0.001087] FS-Cache: N-key=[8] '8b663b0000000000'
	[  +3.135902] FS-Cache: Duplicate cookie detected
	[  +0.000798] FS-Cache: O-cookie c=00000070 [p=0000006f fl=226 nc=0 na=1]
	[  +0.000945] FS-Cache: O-cookie d=00000000620abd40{9p.inode} n=000000000e847315
	[  +0.001098] FS-Cache: O-key=[8] '8a663b0000000000'
	[  +0.000779] FS-Cache: N-cookie c=0000007b [p=0000006f fl=2 nc=0 na=1]
	[  +0.000945] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=000000000a000c51
	[  +0.001045] FS-Cache: N-key=[8] '8a663b0000000000'
	[  +0.290809] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000075 [p=0000006f fl=226 nc=0 na=1]
	[  +0.000970] FS-Cache: O-cookie d=00000000620abd40{9p.inode} n=0000000051211df6
	[  +0.001103] FS-Cache: O-key=[8] '90663b0000000000'
	[  +0.000713] FS-Cache: N-cookie c=0000007c [p=0000006f fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=00000000620abd40{9p.inode} n=00000000e78c3482
	[  +0.001059] FS-Cache: N-key=[8] '90663b0000000000'
	
	* 
	* ==> etcd [3939f14e5a388025610e77efca65c625090748cf0f77e9e57b63f246a7cd94c9] <==
	* {"level":"info","ts":"2023-07-18T00:00:30.084Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-18T00:00:30.085Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-18T00:00:30.085Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-18T00:00:30.085Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-18T00:00:30.085Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-18T00:00:30.094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-07-18T00:00:30.094Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-07-18T00:00:30.119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-18T00:00:30.119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-18T00:00:30.119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-07-18T00:00:30.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-07-18T00:00:30.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-18T00:00:30.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-07-18T00:00:30.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-18T00:00:30.122Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-18T00:00:30.130Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-451668 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-18T00:00:30.130Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-18T00:00:30.132Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-07-18T00:00:30.133Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-18T00:00:30.133Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-18T00:00:30.133Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-18T00:00:30.142Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-18T00:00:30.143Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-18T00:00:30.147Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-18T00:00:30.147Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:02:20 up  8:44,  0 users,  load average: 0.79, 1.49, 1.69
	Linux multinode-451668 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [b3b272e7e8b7e4044e63eb0961327e1eae4a5b510149759079424378020b6ae0] <==
	* I0718 00:00:50.777783       1 main.go:116] setting mtu 1500 for CNI 
	I0718 00:00:50.777793       1 main.go:146] kindnetd IP family: "ipv4"
	I0718 00:00:50.777808       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0718 00:01:21.048294       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0718 00:01:21.063121       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0718 00:01:21.063155       1 main.go:227] handling current node
	I0718 00:01:31.078727       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0718 00:01:31.078843       1 main.go:227] handling current node
	I0718 00:01:41.091180       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0718 00:01:41.091210       1 main.go:227] handling current node
	I0718 00:01:41.091224       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0718 00:01:41.091230       1 main.go:250] Node multinode-451668-m02 has CIDR [10.244.1.0/24] 
	I0718 00:01:41.091355       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0718 00:01:51.096586       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0718 00:01:51.096616       1 main.go:227] handling current node
	I0718 00:01:51.096628       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0718 00:01:51.096634       1 main.go:250] Node multinode-451668-m02 has CIDR [10.244.1.0/24] 
	I0718 00:02:01.104557       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0718 00:02:01.104586       1 main.go:227] handling current node
	I0718 00:02:01.104597       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0718 00:02:01.104603       1 main.go:250] Node multinode-451668-m02 has CIDR [10.244.1.0/24] 
	I0718 00:02:11.117975       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0718 00:02:11.118026       1 main.go:227] handling current node
	I0718 00:02:11.118040       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0718 00:02:11.118046       1 main.go:250] Node multinode-451668-m02 has CIDR [10.244.1.0/24] 
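	
	kindnet's reconcile loop above wakes roughly every 10s and, for each remote node, programs a route sending that node's PodCIDR via its InternalIP ("Adding route {... Dst: 10.244.1.0/24 ... Gw: 192.168.58.3 ...}"). A hedged netlink sketch of that one route add (kindnet's actual code may differ; requires CAP_NET_ADMIN):
	
	package main
	
	import (
		"net"
	
		"github.com/vishvananda/netlink"
	)
	
	func main() {
		_, dst, err := net.ParseCIDR("10.244.1.0/24") // remote node's PodCIDR, per the log
		if err != nil {
			panic(err)
		}
		route := &netlink.Route{
			Dst: dst,
			Gw:  net.ParseIP("192.168.58.3"), // remote node's InternalIP
		}
		// RouteReplace is idempotent, which suits a loop that re-handles every node
		// on each pass, as the repeated "Handling node" lines show.
		if err := netlink.RouteReplace(route); err != nil {
			panic(err)
		}
	}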
	
	* 
	* ==> kube-apiserver [f38376f93d96ae4dff4ad235c074ec6d08ff57448fa949ffccc27333d010de11] <==
	* I0718 00:00:33.823273       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0718 00:00:33.823836       1 aggregator.go:152] initial CRD sync complete...
	I0718 00:00:33.823909       1 autoregister_controller.go:141] Starting autoregister controller
	I0718 00:00:33.823940       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0718 00:00:33.823970       1 cache.go:39] Caches are synced for autoregister controller
	I0718 00:00:33.824466       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0718 00:00:33.837072       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0718 00:00:33.839385       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0718 00:00:34.259279       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0718 00:00:34.625889       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0718 00:00:34.630357       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0718 00:00:34.630384       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0718 00:00:35.197209       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0718 00:00:35.243376       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0718 00:00:35.365543       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0718 00:00:35.372375       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0718 00:00:35.373447       1 controller.go:624] quota admission added evaluator for: endpoints
	I0718 00:00:35.378341       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0718 00:00:35.769636       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0718 00:00:37.032630       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0718 00:00:37.047953       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0718 00:00:37.071802       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0718 00:00:49.371648       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0718 00:00:49.697130       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0718 00:02:15.515777       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400d13cba0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400cc5b1d0), ResponseWriter:(*httpsnoop.rw)(0x400cc5b1d0), Flusher:(*httpsnoop.rw)(0x400cc5b1d0), CloseNotifier:(*httpsnoop.rw)(0x400cc5b1d0), Pusher:(*httpsnoop.rw)(0x400cc5b1d0)}}, encoder:(*versioning.codec)(0x400cc42f00), memAllocator:(*runtime.Allocator)(0x400d13e588)})
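	
	The closing watch error above is the server end of a watch stream being torn down while an event is in flight; it is benign noise when the watcher (here, most likely the finished test client) simply went away. The client side of such a stream, as a hedged client-go sketch with an illustrative kubeconfig path:
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for i := 0; i < 3; i++ {
			ev := <-w.ResultChan()
			fmt.Printf("%s %T\n", ev.Type, ev.Object)
		}
		// Stopping mid-stream is what can surface server-side as the
		// "unable to encode watch object ... stream closed" line above.
		w.Stop()
	}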
	
	* 
	* ==> kube-controller-manager [7d62212a6dd6235b3f1471b76a3ae37820f99f9decd051fb2938e29b79144ddb] <==
	* I0718 00:00:48.950029       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-451668" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0718 00:00:48.957502       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-451668" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0718 00:00:48.969672       1 shared_informer.go:318] Caches are synced for resource quota
	I0718 00:00:49.317686       1 shared_informer.go:318] Caches are synced for garbage collector
	I0718 00:00:49.339272       1 shared_informer.go:318] Caches are synced for garbage collector
	I0718 00:00:49.339305       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0718 00:00:49.376670       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0718 00:00:49.659150       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0718 00:00:49.796654       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jcxjg"
	I0718 00:00:49.796698       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7knpj"
	I0718 00:00:49.821978       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-t7k5v"
	I0718 00:00:49.874276       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-qvgbw"
	I0718 00:00:50.640825       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-t7k5v"
	I0718 00:01:23.909784       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0718 00:01:37.654200       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-451668-m02\" does not exist"
	I0718 00:01:37.683303       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-whgc7"
	I0718 00:01:37.683409       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wm797"
	I0718 00:01:37.695572       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-451668-m02" podCIDRs=[10.244.1.0/24]
	I0718 00:01:38.912860       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-451668-m02"
	I0718 00:01:38.912924       1 event.go:307] "Event occurred" object="multinode-451668-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-451668-m02 event: Registered Node multinode-451668-m02 in Controller"
	W0718 00:02:09.398101       1 topologycache.go:232] Can't get CPU or zone information for multinode-451668-m02 node
	I0718 00:02:11.760335       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0718 00:02:11.786507       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-qfp74"
	I0718 00:02:11.810008       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-d4jjr"
	I0718 00:02:13.927595       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-qfp74" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-qfp74"
	
	* 
	* ==> kube-proxy [813d95b28c643df516457e81bed473acf30e2d76c31e8560ec1decc312a5f08d] <==
	* I0718 00:00:50.883248       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0718 00:00:50.883798       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0718 00:00:50.883873       1 server_others.go:554] "Using iptables proxy"
	I0718 00:00:50.950383       1 server_others.go:192] "Using iptables Proxier"
	I0718 00:00:50.950450       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0718 00:00:50.950460       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0718 00:00:50.950474       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0718 00:00:50.950540       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0718 00:00:50.951131       1 server.go:658] "Version info" version="v1.27.3"
	I0718 00:00:50.951150       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0718 00:00:50.953719       1 config.go:188] "Starting service config controller"
	I0718 00:00:50.953746       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0718 00:00:50.953770       1 config.go:97] "Starting endpoint slice config controller"
	I0718 00:00:50.953774       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0718 00:00:50.956851       1 config.go:315] "Starting node config controller"
	I0718 00:00:50.956873       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0718 00:00:51.054142       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0718 00:00:51.054147       1 shared_informer.go:318] Caches are synced for service config
	I0718 00:00:51.057308       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ece5258048d0822afbdb5fcd47681375a26abd4bbb87679c204933a708f33dc7] <==
	* W0718 00:00:33.792406       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0718 00:00:33.792453       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0718 00:00:33.792567       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0718 00:00:33.794907       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0718 00:00:33.795075       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0718 00:00:33.795132       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0718 00:00:33.795244       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0718 00:00:33.795378       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0718 00:00:34.663167       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0718 00:00:34.663467       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0718 00:00:34.872630       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0718 00:00:34.872669       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0718 00:00:34.873800       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0718 00:00:34.873903       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0718 00:00:34.881722       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0718 00:00:34.881827       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0718 00:00:34.928258       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0718 00:00:34.928358       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0718 00:00:34.933483       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0718 00:00:34.933844       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0718 00:00:34.933795       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0718 00:00:34.933993       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0718 00:00:34.959173       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0718 00:00:34.959293       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0718 00:00:37.285750       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 18 00:00:49 multinode-451668 kubelet[1392]: I0718 00:00:49.835914    1392 topology_manager.go:212] "Topology Admit Handler"
	Jul 18 00:00:49 multinode-451668 kubelet[1392]: I0718 00:00:49.927809    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99d30dc5-6047-4fb1-abd0-ddb9c8729969-xtables-lock\") pod \"kindnet-jcxjg\" (UID: \"99d30dc5-6047-4fb1-abd0-ddb9c8729969\") " pod="kube-system/kindnet-jcxjg"
	Jul 18 00:00:49 multinode-451668 kubelet[1392]: I0718 00:00:49.927858    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6cebdce-80d9-4b8b-8ea5-415bb18d1f07-kube-proxy\") pod \"kube-proxy-7knpj\" (UID: \"e6cebdce-80d9-4b8b-8ea5-415bb18d1f07\") " pod="kube-system/kube-proxy-7knpj"
	Jul 18 00:00:49 multinode-451668 kubelet[1392]: I0718 00:00:49.927884    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6cebdce-80d9-4b8b-8ea5-415bb18d1f07-lib-modules\") pod \"kube-proxy-7knpj\" (UID: \"e6cebdce-80d9-4b8b-8ea5-415bb18d1f07\") " pod="kube-system/kube-proxy-7knpj"
	Jul 18 00:00:49 multinode-451668 kubelet[1392]: I0718 00:00:49.927908    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/99d30dc5-6047-4fb1-abd0-ddb9c8729969-cni-cfg\") pod \"kindnet-jcxjg\" (UID: \"99d30dc5-6047-4fb1-abd0-ddb9c8729969\") " pod="kube-system/kindnet-jcxjg"
	Jul 18 00:00:49 multinode-451668 kubelet[1392]: I0718 00:00:49.927931    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgfp8\" (UniqueName: \"kubernetes.io/projected/99d30dc5-6047-4fb1-abd0-ddb9c8729969-kube-api-access-xgfp8\") pod \"kindnet-jcxjg\" (UID: \"99d30dc5-6047-4fb1-abd0-ddb9c8729969\") " pod="kube-system/kindnet-jcxjg"
	Jul 18 00:00:49 multinode-451668 kubelet[1392]: I0718 00:00:49.927958    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwdd\" (UniqueName: \"kubernetes.io/projected/e6cebdce-80d9-4b8b-8ea5-415bb18d1f07-kube-api-access-jmwdd\") pod \"kube-proxy-7knpj\" (UID: \"e6cebdce-80d9-4b8b-8ea5-415bb18d1f07\") " pod="kube-system/kube-proxy-7knpj"
	Jul 18 00:00:49 multinode-451668 kubelet[1392]: I0718 00:00:49.927982    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99d30dc5-6047-4fb1-abd0-ddb9c8729969-lib-modules\") pod \"kindnet-jcxjg\" (UID: \"99d30dc5-6047-4fb1-abd0-ddb9c8729969\") " pod="kube-system/kindnet-jcxjg"
	Jul 18 00:00:49 multinode-451668 kubelet[1392]: I0718 00:00:49.928005    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6cebdce-80d9-4b8b-8ea5-415bb18d1f07-xtables-lock\") pod \"kube-proxy-7knpj\" (UID: \"e6cebdce-80d9-4b8b-8ea5-415bb18d1f07\") " pod="kube-system/kube-proxy-7knpj"
	Jul 18 00:00:51 multinode-451668 kubelet[1392]: I0718 00:00:51.357567    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7knpj" podStartSLOduration=2.357525362 podCreationTimestamp="2023-07-18 00:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-18 00:00:51.341067582 +0000 UTC m=+14.344921090" watchObservedRunningTime="2023-07-18 00:00:51.357525362 +0000 UTC m=+14.361378878"
	Jul 18 00:00:57 multinode-451668 kubelet[1392]: I0718 00:00:57.232523    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-jcxjg" podStartSLOduration=8.232481406 podCreationTimestamp="2023-07-18 00:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-18 00:00:51.358279036 +0000 UTC m=+14.362132552" watchObservedRunningTime="2023-07-18 00:00:57.232481406 +0000 UTC m=+20.236334922"
	Jul 18 00:01:21 multinode-451668 kubelet[1392]: I0718 00:01:21.541477    1392 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 18 00:01:21 multinode-451668 kubelet[1392]: I0718 00:01:21.578854    1392 topology_manager.go:212] "Topology Admit Handler"
	Jul 18 00:01:21 multinode-451668 kubelet[1392]: I0718 00:01:21.586658    1392 topology_manager.go:212] "Topology Admit Handler"
	Jul 18 00:01:21 multinode-451668 kubelet[1392]: I0718 00:01:21.659735    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsfz4\" (UniqueName: \"kubernetes.io/projected/e1ba839b-7dba-4b50-9c64-851459ea7287-kube-api-access-rsfz4\") pod \"storage-provisioner\" (UID: \"e1ba839b-7dba-4b50-9c64-851459ea7287\") " pod="kube-system/storage-provisioner"
	Jul 18 00:01:21 multinode-451668 kubelet[1392]: I0718 00:01:21.659845    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d2a4d36-002a-4117-b0ec-2c58b2b7249b-config-volume\") pod \"coredns-5d78c9869d-qvgbw\" (UID: \"9d2a4d36-002a-4117-b0ec-2c58b2b7249b\") " pod="kube-system/coredns-5d78c9869d-qvgbw"
	Jul 18 00:01:21 multinode-451668 kubelet[1392]: I0718 00:01:21.659881    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8dwh\" (UniqueName: \"kubernetes.io/projected/9d2a4d36-002a-4117-b0ec-2c58b2b7249b-kube-api-access-t8dwh\") pod \"coredns-5d78c9869d-qvgbw\" (UID: \"9d2a4d36-002a-4117-b0ec-2c58b2b7249b\") " pod="kube-system/coredns-5d78c9869d-qvgbw"
	Jul 18 00:01:21 multinode-451668 kubelet[1392]: I0718 00:01:21.659906    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e1ba839b-7dba-4b50-9c64-851459ea7287-tmp\") pod \"storage-provisioner\" (UID: \"e1ba839b-7dba-4b50-9c64-851459ea7287\") " pod="kube-system/storage-provisioner"
	Jul 18 00:01:21 multinode-451668 kubelet[1392]: W0718 00:01:21.924080    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/865d9e37b02c1a77b484f2287a980e8a32e41c2b0e7dc6accbc61f8116fda149/crio-5da8b939344b1b2a3087aad6eae9a14b8b3d9f716e0deb66708ddb5306b7da2f WatchSource:0}: Error finding container 5da8b939344b1b2a3087aad6eae9a14b8b3d9f716e0deb66708ddb5306b7da2f: Status 404 returned error can't find the container with id 5da8b939344b1b2a3087aad6eae9a14b8b3d9f716e0deb66708ddb5306b7da2f
	Jul 18 00:01:21 multinode-451668 kubelet[1392]: W0718 00:01:21.934315    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/865d9e37b02c1a77b484f2287a980e8a32e41c2b0e7dc6accbc61f8116fda149/crio-b009ea7ae50915fdeba5ed3af462a9c6e5d0bff0b0638ad099606d3d6f0a2f9b WatchSource:0}: Error finding container b009ea7ae50915fdeba5ed3af462a9c6e5d0bff0b0638ad099606d3d6f0a2f9b: Status 404 returned error can't find the container with id b009ea7ae50915fdeba5ed3af462a9c6e5d0bff0b0638ad099606d3d6f0a2f9b
	Jul 18 00:01:22 multinode-451668 kubelet[1392]: I0718 00:01:22.408482    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.408434337 podCreationTimestamp="2023-07-18 00:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-18 00:01:22.395325438 +0000 UTC m=+45.399178954" watchObservedRunningTime="2023-07-18 00:01:22.408434337 +0000 UTC m=+45.412287853"
	Jul 18 00:02:11 multinode-451668 kubelet[1392]: I0718 00:02:11.820812    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-qvgbw" podStartSLOduration=82.820741255 podCreationTimestamp="2023-07-18 00:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-18 00:01:22.408702092 +0000 UTC m=+45.412555641" watchObservedRunningTime="2023-07-18 00:02:11.820741255 +0000 UTC m=+94.824594771"
	Jul 18 00:02:11 multinode-451668 kubelet[1392]: I0718 00:02:11.821237    1392 topology_manager.go:212] "Topology Admit Handler"
	Jul 18 00:02:11 multinode-451668 kubelet[1392]: I0718 00:02:11.959397    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgjhw\" (UniqueName: \"kubernetes.io/projected/1312769c-96cf-47d4-8989-7e0e7d0a1e1a-kube-api-access-xgjhw\") pod \"busybox-67b7f59bb-d4jjr\" (UID: \"1312769c-96cf-47d4-8989-7e0e7d0a1e1a\") " pod="default/busybox-67b7f59bb-d4jjr"
	Jul 18 00:02:12 multinode-451668 kubelet[1392]: W0718 00:02:12.178580    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/865d9e37b02c1a77b484f2287a980e8a32e41c2b0e7dc6accbc61f8116fda149/crio-08749841a4efe21132b5f1bfd90007bfd9083820dcd56c989bc5634d92077ea9 WatchSource:0}: Error finding container 08749841a4efe21132b5f1bfd90007bfd9083820dcd56c989bc5634d92077ea9: Status 404 returned error can't find the container with id 08749841a4efe21132b5f1bfd90007bfd9083820dcd56c989bc5634d92077ea9
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-451668 -n multinode-451668
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-451668 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.59s)
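For reference, the failing host-reachability check can be re-run by hand against the same cluster. This is a sketch rather than the test's exact invocation: the pod name is taken from the ReplicaSet events above, and 192.168.58.1 is an assumed docker-network gateway inferred from the node IP 192.168.58.2 in the kube-proxy log, not a value confirmed by the test output.

	# Hypothetical manual re-run of the host-ping check (assumed names/addresses, see note above):
	kubectl --context multinode-451668 exec busybox-67b7f59bb-qfp74 -- nslookup host.minikube.internal
	kubectl --context multinode-451668 exec busybox-67b7f59bb-qfp74 -- ping -c 1 192.168.58.1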

TestRunningBinaryUpgrade (104.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.1649988919.exe start -p running-upgrade-401040 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.17.0.1649988919.exe start -p running-upgrade-401040 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m35.265692775s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-401040 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-401040 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (4.771282257s)

-- stdout --
	* [running-upgrade-401040] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-401040 in cluster running-upgrade-401040
	* Pulling base image ...
	* Updating the running docker "running-upgrade-401040" container ...
	
	

-- /stdout --
** stderr ** 
	I0718 00:24:11.040772 1952203 out.go:296] Setting OutFile to fd 1 ...
	I0718 00:24:11.040952 1952203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:24:11.040962 1952203 out.go:309] Setting ErrFile to fd 2...
	I0718 00:24:11.040968 1952203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:24:11.041245 1952203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	I0718 00:24:11.041622 1952203 out.go:303] Setting JSON to false
	I0718 00:24:11.044628 1952203 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32795,"bootTime":1689607056,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0718 00:24:11.045302 1952203 start.go:138] virtualization:  
	I0718 00:24:11.048092 1952203 out.go:177] * [running-upgrade-401040] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0718 00:24:11.049843 1952203 out.go:177]   - MINIKUBE_LOCATION=16899
	I0718 00:24:11.051584 1952203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 00:24:11.050192 1952203 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0718 00:24:11.050239 1952203 notify.go:220] Checking for updates...
	I0718 00:24:11.055344 1952203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0718 00:24:11.057208 1952203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	I0718 00:24:11.059073 1952203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0718 00:24:11.060804 1952203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 00:24:11.063104 1952203 config.go:182] Loaded profile config "running-upgrade-401040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0718 00:24:11.066914 1952203 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0718 00:24:11.069059 1952203 driver.go:373] Setting default libvirt URI to qemu:///system
	I0718 00:24:11.125858 1952203 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0718 00:24:11.126093 1952203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 00:24:11.309360 1952203 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0718 00:24:11.331522 1952203 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2023-07-18 00:24:11.317983744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0718 00:24:11.331626 1952203 docker.go:294] overlay module found
	I0718 00:24:11.333610 1952203 out.go:177] * Using the docker driver based on existing profile
	I0718 00:24:11.335307 1952203 start.go:298] selected driver: docker
	I0718 00:24:11.335325 1952203 start.go:880] validating driver "docker" against &{Name:running-upgrade-401040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-401040 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.145 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0718 00:24:11.335432 1952203 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 00:24:11.336225 1952203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 00:24:11.473736 1952203 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2023-07-18 00:24:11.461755084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0718 00:24:11.474049 1952203 cni.go:84] Creating CNI manager for ""
	I0718 00:24:11.474059 1952203 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0718 00:24:11.474069 1952203 start_flags.go:319] config:
	{Name:running-upgrade-401040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-401040 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.145 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0718 00:24:11.476116 1952203 out.go:177] * Starting control plane node running-upgrade-401040 in cluster running-upgrade-401040
	I0718 00:24:11.477816 1952203 cache.go:122] Beginning downloading kic base image for docker with crio
	I0718 00:24:11.479426 1952203 out.go:177] * Pulling base image ...
	I0718 00:24:11.481013 1952203 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0718 00:24:11.481176 1952203 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0718 00:24:11.520456 1952203 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0718 00:24:11.520492 1952203 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0718 00:24:11.553460 1952203 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0718 00:24:11.553617 1952203 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/running-upgrade-401040/config.json ...
	I0718 00:24:11.553857 1952203 cache.go:195] Successfully downloaded all kic artifacts
	I0718 00:24:11.553907 1952203 start.go:365] acquiring machines lock for running-upgrade-401040: {Name:mk846992370d00173615a6d896dffb19840361f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:11.553958 1952203 start.go:369] acquired machines lock for "running-upgrade-401040" in 33.115µs
	I0718 00:24:11.553973 1952203 start.go:96] Skipping create...Using existing machine configuration
	I0718 00:24:11.553978 1952203 fix.go:54] fixHost starting: 
	I0718 00:24:11.554268 1952203 cli_runner.go:164] Run: docker container inspect running-upgrade-401040 --format={{.State.Status}}
	I0718 00:24:11.554630 1952203 cache.go:107] acquiring lock: {Name:mkf3adb8fce5e1fb5ae0829224518143650ee450 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:11.554699 1952203 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0718 00:24:11.554707 1952203 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 82.419µs
	I0718 00:24:11.554716 1952203 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0718 00:24:11.554724 1952203 cache.go:107] acquiring lock: {Name:mkb4d63214113931a675a1f85c29d2bb8e46d535 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:11.554757 1952203 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0718 00:24:11.554762 1952203 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 39.745µs
	I0718 00:24:11.554769 1952203 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0718 00:24:11.554775 1952203 cache.go:107] acquiring lock: {Name:mk87fca18702787d10d8299b31d75f1bf5b34273 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:11.554802 1952203 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0718 00:24:11.554806 1952203 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 32.131µs
	I0718 00:24:11.554813 1952203 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0718 00:24:11.554820 1952203 cache.go:107] acquiring lock: {Name:mk8d356e5a8e8ecd7c773e8f4561e6eda01a0db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:11.554851 1952203 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0718 00:24:11.554855 1952203 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 36.873µs
	I0718 00:24:11.554861 1952203 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0718 00:24:11.554873 1952203 cache.go:107] acquiring lock: {Name:mk0bcf2410225d8090ba3303a181548f1c4c100a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:11.554897 1952203 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0718 00:24:11.554901 1952203 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 32.303µs
	I0718 00:24:11.554907 1952203 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0718 00:24:11.554915 1952203 cache.go:107] acquiring lock: {Name:mkabc6d6057401f474cbacab1c730a5ff7e2d6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:11.554949 1952203 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0718 00:24:11.554953 1952203 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 39.22µs
	I0718 00:24:11.554959 1952203 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0718 00:24:11.554970 1952203 cache.go:107] acquiring lock: {Name:mk1437ec8761366896c8ceb88aba8606743914b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:11.554994 1952203 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0718 00:24:11.554998 1952203 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 29.628µs
	I0718 00:24:11.555004 1952203 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0718 00:24:11.555015 1952203 cache.go:107] acquiring lock: {Name:mk9a6d3366467c89c6960be774c9154764df9767 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:11.555039 1952203 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0718 00:24:11.555044 1952203 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 31.737µs
	I0718 00:24:11.555049 1952203 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0718 00:24:11.555055 1952203 cache.go:87] Successfully saved all images to host disk.
	I0718 00:24:11.586506 1952203 fix.go:102] recreateIfNeeded on running-upgrade-401040: state=Running err=<nil>
	W0718 00:24:11.586545 1952203 fix.go:128] unexpected machine state, will restart: <nil>
	I0718 00:24:11.589602 1952203 out.go:177] * Updating the running docker "running-upgrade-401040" container ...
	I0718 00:24:11.591203 1952203 machine.go:88] provisioning docker machine ...
	I0718 00:24:11.591259 1952203 ubuntu.go:169] provisioning hostname "running-upgrade-401040"
	I0718 00:24:11.591338 1952203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-401040
	I0718 00:24:11.622915 1952203 main.go:141] libmachine: Using SSH client type: native
	I0718 00:24:11.623378 1952203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34866 <nil> <nil>}
	I0718 00:24:11.623397 1952203 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-401040 && echo "running-upgrade-401040" | sudo tee /etc/hostname
	I0718 00:24:11.811286 1952203 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-401040
	
	I0718 00:24:11.811363 1952203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-401040
	I0718 00:24:11.830751 1952203 main.go:141] libmachine: Using SSH client type: native
	I0718 00:24:11.831187 1952203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34866 <nil> <nil>}
	I0718 00:24:11.831209 1952203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-401040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-401040/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-401040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 00:24:11.975630 1952203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 00:24:11.975685 1952203 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-1800837/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-1800837/.minikube}
	I0718 00:24:11.975717 1952203 ubuntu.go:177] setting up certificates
	I0718 00:24:11.975727 1952203 provision.go:83] configureAuth start
	I0718 00:24:11.975872 1952203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-401040
	I0718 00:24:12.014936 1952203 provision.go:138] copyHostCerts
	I0718 00:24:12.015015 1952203 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem, removing ...
	I0718 00:24:12.015035 1952203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem
	I0718 00:24:12.015121 1952203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem (1123 bytes)
	I0718 00:24:12.015231 1952203 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem, removing ...
	I0718 00:24:12.015241 1952203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem
	I0718 00:24:12.015270 1952203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem (1675 bytes)
	I0718 00:24:12.015336 1952203 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem, removing ...
	I0718 00:24:12.015345 1952203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem
	I0718 00:24:12.015371 1952203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem (1082 bytes)
	I0718 00:24:12.015430 1952203 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-401040 san=[192.168.59.145 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-401040]
	I0718 00:24:12.213200 1952203 provision.go:172] copyRemoteCerts
	I0718 00:24:12.213275 1952203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 00:24:12.213327 1952203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-401040
	I0718 00:24:12.231450 1952203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34866 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/running-upgrade-401040/id_rsa Username:docker}
	I0718 00:24:12.332688 1952203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 00:24:12.363653 1952203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0718 00:24:12.388493 1952203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 00:24:12.415905 1952203 provision.go:86] duration metric: configureAuth took 440.128329ms
	I0718 00:24:12.415930 1952203 ubuntu.go:193] setting minikube options for container-runtime
	I0718 00:24:12.416108 1952203 config.go:182] Loaded profile config "running-upgrade-401040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0718 00:24:12.416216 1952203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-401040
	I0718 00:24:12.441311 1952203 main.go:141] libmachine: Using SSH client type: native
	I0718 00:24:12.441787 1952203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34866 <nil> <nil>}
	I0718 00:24:12.441807 1952203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0718 00:24:13.122625 1952203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0718 00:24:13.122647 1952203 machine.go:91] provisioned docker machine in 1.53142083s
	I0718 00:24:13.122658 1952203 start.go:300] post-start starting for "running-upgrade-401040" (driver="docker")
	I0718 00:24:13.122668 1952203 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 00:24:13.122737 1952203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 00:24:13.122801 1952203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-401040
	I0718 00:24:13.149990 1952203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34866 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/running-upgrade-401040/id_rsa Username:docker}
	I0718 00:24:13.253204 1952203 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 00:24:13.257263 1952203 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0718 00:24:13.257285 1952203 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0718 00:24:13.257296 1952203 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0718 00:24:13.257302 1952203 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0718 00:24:13.257312 1952203 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/addons for local assets ...
	I0718 00:24:13.257367 1952203 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/files for local assets ...
	I0718 00:24:13.257447 1952203 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> 18062262.pem in /etc/ssl/certs
	I0718 00:24:13.257554 1952203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 00:24:13.266376 1952203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem --> /etc/ssl/certs/18062262.pem (1708 bytes)
	I0718 00:24:13.292275 1952203 start.go:303] post-start completed in 169.601212ms
	I0718 00:24:13.292415 1952203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 00:24:13.292499 1952203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-401040
	I0718 00:24:13.319637 1952203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34866 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/running-upgrade-401040/id_rsa Username:docker}
	I0718 00:24:13.417246 1952203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 00:24:13.423016 1952203 fix.go:56] fixHost completed within 1.86903146s
	I0718 00:24:13.423039 1952203 start.go:83] releasing machines lock for "running-upgrade-401040", held for 1.869072641s
	I0718 00:24:13.423113 1952203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-401040
	I0718 00:24:13.441953 1952203 ssh_runner.go:195] Run: cat /version.json
	I0718 00:24:13.442008 1952203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-401040
	I0718 00:24:13.442259 1952203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 00:24:13.442335 1952203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-401040
	I0718 00:24:13.475162 1952203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34866 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/running-upgrade-401040/id_rsa Username:docker}
	I0718 00:24:13.476907 1952203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34866 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/running-upgrade-401040/id_rsa Username:docker}
	W0718 00:24:13.571401 1952203 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
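	(This warning is benign on its own: /version.json is only embedded in newer kicbase images, and this container was created from kicbase v0.0.17 by the v1.17.0 binary, per the profile config above. A manual probe of the same condition, as a sketch using the profile name from this test:
	    docker exec running-upgrade-401040 cat /version.json
	which would be expected to fail the same way on the old image.)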
	I0718 00:24:13.571499 1952203 ssh_runner.go:195] Run: systemctl --version
	I0718 00:24:13.659495 1952203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0718 00:24:13.792646 1952203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 00:24:13.798661 1952203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 00:24:13.843064 1952203 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0718 00:24:13.843137 1952203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 00:24:13.877489 1952203 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 00:24:13.877510 1952203 start.go:466] detecting cgroup driver to use...
	I0718 00:24:13.877540 1952203 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0718 00:24:13.877591 1952203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 00:24:13.910502 1952203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 00:24:13.923842 1952203 docker.go:196] disabling cri-docker service (if available) ...
	I0718 00:24:13.923983 1952203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0718 00:24:13.962547 1952203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0718 00:24:13.989408 1952203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0718 00:24:14.054925 1952203 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0718 00:24:14.054993 1952203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0718 00:24:14.476650 1952203 docker.go:212] disabling docker service ...
	I0718 00:24:14.476723 1952203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0718 00:24:14.541875 1952203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0718 00:24:14.589657 1952203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0718 00:24:15.312455 1952203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0718 00:24:15.623969 1952203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0718 00:24:15.653286 1952203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 00:24:15.697694 1952203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0718 00:24:15.697767 1952203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0718 00:24:15.714699 1952203 out.go:177] 
	W0718 00:24:15.716463 1952203 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0718 00:24:15.716640 1952203 out.go:239] * 
	W0718 00:24:15.718207 1952203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 00:24:15.720660 1952203 out.go:177] 

** /stderr **
version_upgrade_test.go:144: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-401040 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
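
The failure itself is narrow: the new binary rewrites pause_image via sed in the drop-in file /etc/crio/crio.conf.d/02-crio.conf, but the kicbase v0.0.17 image the v1.17.0 profile was built from predates that drop-in layout, so sed exits 2 and start aborts with RUNTIME_ENABLE. A hedged sketch of a more defensive update that probes for an existing config first; the fallback path to the monolithic /etc/crio/crio.conf is an assumption about the older image, not confirmed minikube behavior:

	// pauseimage.go: update CRI-O's pause_image, tolerating older images that
	// lack the crio.conf.d drop-in directory.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func setPauseImage(image string) error {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // path the failing sed assumed
			"/etc/crio/crio.conf",                // assumed legacy location (unverified)
		}
		for _, conf := range candidates {
			if _, err := os.Stat(conf); err != nil {
				continue // missing file: try the next candidate instead of letting sed exit 2
			}
			expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
			return exec.Command("sudo", "sed", "-i", expr, conf).Run()
		}
		return fmt.Errorf("no cri-o config found to update pause_image")
	}

	func main() {
		if err := setPauseImage("registry.k8s.io/pause:3.2"); err != nil {
			fmt.Println("update pause_image:", err)
		}
	}
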
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-18 00:24:15.761395602 +0000 UTC m=+2814.479465942
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-401040
helpers_test.go:235: (dbg) docker inspect running-upgrade-401040:

-- stdout --
	[
	    {
	        "Id": "893528f0c38d6fb1b29fd01ac1db54a56d36ffb00dc36a082fcacea7a2fa4496",
	        "Created": "2023-07-18T00:22:47.916998905Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1945148,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-18T00:22:48.745678604Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/893528f0c38d6fb1b29fd01ac1db54a56d36ffb00dc36a082fcacea7a2fa4496/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/893528f0c38d6fb1b29fd01ac1db54a56d36ffb00dc36a082fcacea7a2fa4496/hostname",
	        "HostsPath": "/var/lib/docker/containers/893528f0c38d6fb1b29fd01ac1db54a56d36ffb00dc36a082fcacea7a2fa4496/hosts",
	        "LogPath": "/var/lib/docker/containers/893528f0c38d6fb1b29fd01ac1db54a56d36ffb00dc36a082fcacea7a2fa4496/893528f0c38d6fb1b29fd01ac1db54a56d36ffb00dc36a082fcacea7a2fa4496-json.log",
	        "Name": "/running-upgrade-401040",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-401040:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-401040",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/55ea6168d2bca328b9826d11b8afee6cc25c6ea4da6cc6051cb9daa967fe0891-init/diff:/var/lib/docker/overlay2/e8f3520c062cbac8f4631e1fcebc75cec97053cbc5d26b4c8dd05cb03dd26b9f/diff:/var/lib/docker/overlay2/793ee46f34d3b51c5b900c6ec2416c6666480fdafde073ff15a2eff1505c8e09/diff:/var/lib/docker/overlay2/83549c5070cad484ad09e2c17aaa5aa218411e7afdcc9ba663c8deed5cbeed03/diff:/var/lib/docker/overlay2/5ea1c9ee855ad95c20a0e451f2aad91c10e3987a4e249a27c47033f42901a759/diff:/var/lib/docker/overlay2/4a1e68ddfa6873b033eb8dd336afbada395b79e2caf6480ac49810f62a2777a6/diff:/var/lib/docker/overlay2/d00c85a1df167208c1eacb3fd3e1b87d552868c1dfce71f747ae5b0a2e371e54/diff:/var/lib/docker/overlay2/2f8d0c26778ef928fe13459abb1cee824232c1d3f786ceb715bf721f29916723/diff:/var/lib/docker/overlay2/f54d778fcf9fd2e754dde7b1f359cedc4a02e51cae0adf252b4c7644f7ab6e52/diff:/var/lib/docker/overlay2/623215d077707760cbd2323baae16e79345a653d16abb4e5a156caae4d5c83f3/diff:/var/lib/docker/overlay2/3006f1
7882d749d0f13570378ef4ae6acd949242e553f66af92d25baa2324c50/diff:/var/lib/docker/overlay2/74e79838c2797221007fb2727eeffdf50d840bcdb6eb27c0abc191c7bd522049/diff:/var/lib/docker/overlay2/921888b75d425f05a5484d114b7041987181d84efed74ff45a55d06c5fa9b533/diff:/var/lib/docker/overlay2/3422c4c8f953f5c25d787fdb8d952f329d4c3a80b2518f3cec621fd50f911cc0/diff:/var/lib/docker/overlay2/524f27cc7fe4dd0e3abb281ba7287a0f9117d8f578071dd2c3c0303e075fd98c/diff:/var/lib/docker/overlay2/8a59878375fbd3112820cee6a5863adf148af470d528b788e73be87834fac91c/diff:/var/lib/docker/overlay2/a549627c77e380c37082b7b7a15bfbe194240033c4402568303c623304d3dabc/diff:/var/lib/docker/overlay2/c7792f75ad1dddf6ce493bfe78ab2ba912145e62484e495529b9d2c16d67bae8/diff:/var/lib/docker/overlay2/9f8912c5c52319bc01aa96c322c204285fe465cf9f68a5aeba11a9acbefdd186/diff:/var/lib/docker/overlay2/1724530b3c0b1a1d4c2aa5906d7567eb7216dde9b0298b78c14e24a2c91a9b0c/diff:/var/lib/docker/overlay2/65baae65ba01917dd80e039bee7ea510716d27b7db5f6f31417b79395c0388a3/diff:/var/lib/d
ocker/overlay2/fb636680aeaef5f875cc2a1d0c023cf3877fbe00f75828aaeea087687e25e25a/diff:/var/lib/docker/overlay2/2e3c4e5bb7f321ba9984cbec708aba1f2297a8556ca406f8794cffdd7bc0acbf/diff:/var/lib/docker/overlay2/595d9c5e6a7eb460d52c1f2c997dbbc8f01dbd7e121de82952c5c51a53ced730/diff:/var/lib/docker/overlay2/7bea71b9cc12ff0aaebc5106c9ad3239ecc9956b5924a834e102795040639d66/diff:/var/lib/docker/overlay2/40e7fd7cb12c31fa4558e2574a5f10890588e391cadc8dbc5f117ca05e872601/diff:/var/lib/docker/overlay2/7f2d7ec3178f6f9f1b55caa996834543c3c0ec556a36cd33e8422edb058888a0/diff:/var/lib/docker/overlay2/c0e4d51600ca1ab9e3d4f9a935d1b3737219b5d9406227847281e334b65fc344/diff:/var/lib/docker/overlay2/8f7e058bf3f63e124338ad660a78fc2696e6bdac680821efeb95112f63be6b9f/diff:/var/lib/docker/overlay2/5d5dfc4fa683c2144d5e35e02d67b656b2e567f08098c007e91b5d0e042b09d1/diff:/var/lib/docker/overlay2/f4163654836a3e34c232396d591e8a6737f72ecc66fae5d435a51fd932c8c9db/diff:/var/lib/docker/overlay2/b2bec3e686b7be1acfb2d166633fdfd64c3028786ddbb473306d34249a6
e5406/diff:/var/lib/docker/overlay2/72bed38f9a14f6c0eb1a2ff0d41a82df3b49f06c531f554650fea34095d43d50/diff:/var/lib/docker/overlay2/258e925377794f47020c326f5fb535623488ff1fa9d149f355bc4980c2af5ca0/diff:/var/lib/docker/overlay2/d43811e11f3424679b1b7872e580b673ef9d85b0934e4eb488e2a6f8890989e3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55ea6168d2bca328b9826d11b8afee6cc25c6ea4da6cc6051cb9daa967fe0891/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55ea6168d2bca328b9826d11b8afee6cc25c6ea4da6cc6051cb9daa967fe0891/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55ea6168d2bca328b9826d11b8afee6cc25c6ea4da6cc6051cb9daa967fe0891/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-401040",
	                "Source": "/var/lib/docker/volumes/running-upgrade-401040/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-401040",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-401040",
	                "name.minikube.sigs.k8s.io": "running-upgrade-401040",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0bfbc3405e9329c972151705ef693769d417dbd7290bc9f0da34067c59d3eadf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34866"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34865"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34864"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34863"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0bfbc3405e93",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-401040": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.145"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "893528f0c38d",
	                        "running-upgrade-401040"
	                    ],
	                    "NetworkID": "fb4c61dc73c20d6e272fd2d2ad70d09752789af1a7a4c8c0e4a85bf1347a53b4",
	                    "EndpointID": "1af9b97cb5cf86c10f3028b18b02f7013c3b0f0e73260294f7fb1f65b66ca7e8",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.145",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:91",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-401040 -n running-upgrade-401040
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-401040 -n running-upgrade-401040: exit status 4 (698.774186ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0718 00:24:16.432512 1953157 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-401040" does not appear in /home/jenkins/minikube-integration/16899-1800837/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-401040" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-401040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-401040
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-401040: (2.947198651s)
--- FAIL: TestRunningBinaryUpgrade (104.84s)

x
+
TestMissingContainerUpgrade (134.02s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.17.0.1789027928.exe start -p missing-upgrade-387571 --memory=2200 --driver=docker  --container-runtime=crio
E0718 00:20:42.852364 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.17.0.1789027928.exe start -p missing-upgrade-387571 --memory=2200 --driver=docker  --container-runtime=crio: (1m30.625711405s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-387571
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-387571: (1.612113206s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-387571
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-387571 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-387571 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (37.60171274s)

-- stdout --
	* [missing-upgrade-387571] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-387571 in cluster missing-upgrade-387571
	* Pulling base image ...
	* docker "missing-upgrade-387571" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0718 00:21:54.266055 1940237 out.go:296] Setting OutFile to fd 1 ...
	I0718 00:21:54.266265 1940237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:21:54.266289 1940237 out.go:309] Setting ErrFile to fd 2...
	I0718 00:21:54.266307 1940237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:21:54.266616 1940237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	I0718 00:21:54.267467 1940237 out.go:303] Setting JSON to false
	I0718 00:21:54.269225 1940237 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32659,"bootTime":1689607056,"procs":366,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0718 00:21:54.269326 1940237 start.go:138] virtualization:  
	I0718 00:21:54.273298 1940237 out.go:177] * [missing-upgrade-387571] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0718 00:21:54.277386 1940237 out.go:177]   - MINIKUBE_LOCATION=16899
	I0718 00:21:54.278297 1940237 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0718 00:21:54.292610 1940237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 00:21:54.279643 1940237 notify.go:220] Checking for updates...
	I0718 00:21:54.296746 1940237 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0718 00:21:54.298740 1940237 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	I0718 00:21:54.300497 1940237 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0718 00:21:54.302673 1940237 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 00:21:54.307359 1940237 config.go:182] Loaded profile config "missing-upgrade-387571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0718 00:21:54.310583 1940237 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0718 00:21:54.314611 1940237 driver.go:373] Setting default libvirt URI to qemu:///system
	I0718 00:21:54.360451 1940237 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0718 00:21:54.360928 1940237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 00:21:54.428996 1940237 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0718 00:21:54.526984 1940237 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-18 00:21:54.515431058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0718 00:21:54.527095 1940237 docker.go:294] overlay module found
	I0718 00:21:54.530271 1940237 out.go:177] * Using the docker driver based on existing profile
	I0718 00:21:54.531898 1940237 start.go:298] selected driver: docker
	I0718 00:21:54.532360 1940237 start.go:880] validating driver "docker" against &{Name:missing-upgrade-387571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-387571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.244 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0718 00:21:54.532482 1940237 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 00:21:54.533624 1940237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 00:21:54.630611 1940237 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-18 00:21:54.614519327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0718 00:21:54.630931 1940237 cni.go:84] Creating CNI manager for ""
	I0718 00:21:54.630949 1940237 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0718 00:21:54.631446 1940237 start_flags.go:319] config:
	{Name:missing-upgrade-387571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-387571 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.244 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0718 00:21:54.633617 1940237 out.go:177] * Starting control plane node missing-upgrade-387571 in cluster missing-upgrade-387571
	I0718 00:21:54.635633 1940237 cache.go:122] Beginning downloading kic base image for docker with crio
	I0718 00:21:54.637508 1940237 out.go:177] * Pulling base image ...
	I0718 00:21:54.639271 1940237 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0718 00:21:54.639796 1940237 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0718 00:21:54.667079 1940237 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0718 00:21:54.668073 1940237 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0718 00:21:54.668576 1940237 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0718 00:21:54.696434 1940237 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0718 00:21:54.696597 1940237 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/missing-upgrade-387571/config.json ...
	I0718 00:21:54.698602 1940237 cache.go:107] acquiring lock: {Name:mkf3adb8fce5e1fb5ae0829224518143650ee450 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:21:54.698704 1940237 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0718 00:21:54.698717 1940237 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 123.175µs
	I0718 00:21:54.698736 1940237 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0718 00:21:54.698746 1940237 cache.go:107] acquiring lock: {Name:mkb4d63214113931a675a1f85c29d2bb8e46d535 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:21:54.698845 1940237 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0718 00:21:54.698980 1940237 cache.go:107] acquiring lock: {Name:mk0bcf2410225d8090ba3303a181548f1c4c100a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:21:54.699078 1940237 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0718 00:21:54.699473 1940237 cache.go:107] acquiring lock: {Name:mkabc6d6057401f474cbacab1c730a5ff7e2d6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:21:54.699566 1940237 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0718 00:21:54.699656 1940237 cache.go:107] acquiring lock: {Name:mk1437ec8761366896c8ceb88aba8606743914b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:21:54.699726 1940237 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0718 00:21:54.699894 1940237 cache.go:107] acquiring lock: {Name:mk9a6d3366467c89c6960be774c9154764df9767 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:21:54.699972 1940237 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0718 00:21:54.700174 1940237 cache.go:107] acquiring lock: {Name:mk87fca18702787d10d8299b31d75f1bf5b34273 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:21:54.700257 1940237 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0718 00:21:54.700351 1940237 cache.go:107] acquiring lock: {Name:mk8d356e5a8e8ecd7c773e8f4561e6eda01a0db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:21:54.700411 1940237 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0718 00:21:54.706543 1940237 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0718 00:21:54.707773 1940237 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0718 00:21:54.706510 1940237 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0718 00:21:54.708536 1940237 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0718 00:21:54.708796 1940237 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0718 00:21:54.709332 1940237 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0718 00:21:54.712852 1940237 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0718 00:21:55.188861 1940237 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0718 00:21:55.190032 1940237 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I0718 00:21:55.194245 1940237 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	W0718 00:21:55.194887 1940237 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0718 00:21:55.194944 1940237 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	W0718 00:21:55.230541 1940237 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0718 00:21:55.230650 1940237 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W0718 00:21:55.260181 1940237 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0718 00:21:55.260274 1940237 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I0718 00:21:55.278822 1940237 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I0718 00:21:55.298845 1940237 cache.go:157] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0718 00:21:55.298906 1940237 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 599.415875ms
	I0718 00:21:55.298929 1940237 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0718 00:21:55.729369 1940237 cache.go:157] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0718 00:21:55.729409 1940237 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 1.029516986s
	I0718 00:21:55.729421 1940237 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0718 00:21:55.940584 1940237 cache.go:157] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0718 00:21:55.940611 1940237 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.240260815s
	I0718 00:21:55.940624 1940237 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0718 00:21:56.067334 1940237 cache.go:157] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0718 00:21:56.067425 1940237 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.367252115s
	I0718 00:21:56.067454 1940237 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0718 00:21:56.272833 1940237 cache.go:157] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0718 00:21:56.272859 1940237 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.574112315s
	I0718 00:21:56.272892 1940237 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0718 00:21:57.001963 1940237 cache.go:157] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0718 00:21:57.002000 1940237 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.303023997s
	I0718 00:21:57.002014 1940237 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0718 00:21:58.330982 1940237 cache.go:157] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0718 00:21:58.331005 1940237 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 3.631349759s
	I0718 00:21:58.331018 1940237 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0718 00:21:58.331042 1940237 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 44.57 M (interleaved download-progress updates elided)
	I0718 00:22:01.857013 1940237 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0718 00:22:01.857028 1940237 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0718 00:22:03.817677 1940237 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0718 00:22:03.817716 1940237 cache.go:195] Successfully downloaded all kic artifacts
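
All of the cache.go lines above follow one check-before-fetch pattern: take a per-image lock, stat the expected tarball under .minikube/cache, and only download on a miss, which is why hits report in microseconds while misses take seconds. A generic sketch of that pattern (ensureCached and fetch are illustrative names, not minikube's API):

	// imagecache.go: return a cached tarball path if it exists, else fetch it.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"time"
	)

	func ensureCached(cacheDir, image string, fetch func(dst string) error) (string, error) {
		start := time.Now()
		// registry.k8s.io/pause:3.2 -> registry.k8s.io/pause_3.2, as in the log paths.
		dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
		if _, err := os.Stat(dst); err == nil {
			fmt.Printf("cache image %q -> %q took %s\n", image, dst, time.Since(start))
			return dst, nil // hit: nothing to download
		}
		if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
			return "", err
		}
		if err := fetch(dst); err != nil {
			return "", err
		}
		fmt.Printf("save to tar file %s -> %s succeeded\n", image, dst)
		return dst, nil
	}

	func main() {
		dir, _ := os.MkdirTemp("", "cache")
		fetch := func(dst string) error { return os.WriteFile(dst, []byte("tarball"), 0o644) }
		ensureCached(dir, "registry.k8s.io/pause:3.2", fetch) // miss: fetches and saves
		ensureCached(dir, "registry.k8s.io/pause:3.2", fetch) // hit: stat only
	}
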
	I0718 00:22:03.818328 1940237 start.go:365] acquiring machines lock for missing-upgrade-387571: {Name:mk9e8c2f11ced3ca26133745159395e844afe83c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:22:03.818462 1940237 start.go:369] acquired machines lock for "missing-upgrade-387571" in 87.137µs
	I0718 00:22:03.818493 1940237 start.go:96] Skipping create...Using existing machine configuration
	I0718 00:22:03.818502 1940237 fix.go:54] fixHost starting: 
	I0718 00:22:03.818798 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:03.835723 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	I0718 00:22:03.835793 1940237 fix.go:102] recreateIfNeeded on missing-upgrade-387571: state= err=unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:03.835818 1940237 fix.go:107] machineExists: false. err=machine does not exist
	I0718 00:22:03.838974 1940237 out.go:177] * docker "missing-upgrade-387571" container is missing, will recreate.
	I0718 00:22:03.840750 1940237 delete.go:124] DEMOLISHING missing-upgrade-387571 ...
	I0718 00:22:03.840857 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:03.858282 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	W0718 00:22:03.858343 1940237 stop.go:75] unable to get state: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:03.858363 1940237 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:03.858869 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:03.875900 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	I0718 00:22:03.875980 1940237 delete.go:82] Unable to get host status for missing-upgrade-387571, assuming it has already been deleted: state: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:03.876050 1940237 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-387571
	W0718 00:22:03.892596 1940237 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-387571 returned with exit code 1
	I0718 00:22:03.892629 1940237 kic.go:367] could not find the container missing-upgrade-387571 to remove it. will try anyways
	I0718 00:22:03.892688 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:03.909758 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	W0718 00:22:03.909817 1940237 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:03.909898 1940237 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-387571 /bin/bash -c "sudo init 0"
	W0718 00:22:03.929652 1940237 cli_runner.go:211] docker exec --privileged -t missing-upgrade-387571 /bin/bash -c "sudo init 0" returned with exit code 1
	I0718 00:22:03.929686 1940237 oci.go:647] error shutdown missing-upgrade-387571: docker exec --privileged -t missing-upgrade-387571 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:04.929882 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:04.953297 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	I0718 00:22:04.953361 1940237 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:04.953375 1940237 oci.go:661] temporary error: container missing-upgrade-387571 status is  but expect it to be exited
	I0718 00:22:04.953406 1940237 retry.go:31] will retry after 621.457531ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:05.575048 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:05.593261 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	I0718 00:22:05.593321 1940237 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:05.593334 1940237 oci.go:661] temporary error: container missing-upgrade-387571 status is  but expect it to be exited
	I0718 00:22:05.593360 1940237 retry.go:31] will retry after 520.114947ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:06.114084 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:06.135313 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	I0718 00:22:06.135382 1940237 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:06.135398 1940237 oci.go:661] temporary error: container missing-upgrade-387571 status is  but expect it to be exited
	I0718 00:22:06.135423 1940237 retry.go:31] will retry after 1.062590826s: couldn't verify container is exited. %v: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:07.198259 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:07.217198 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	I0718 00:22:07.217262 1940237 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:07.217281 1940237 oci.go:661] temporary error: container missing-upgrade-387571 status is  but expect it to be exited
	I0718 00:22:07.217310 1940237 retry.go:31] will retry after 906.395845ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:08.124456 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:08.144996 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	I0718 00:22:08.145082 1940237 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:08.145097 1940237 oci.go:661] temporary error: container missing-upgrade-387571 status is  but expect it to be exited
	I0718 00:22:08.145121 1940237 retry.go:31] will retry after 1.337192676s: couldn't verify container is exited. %v: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:09.483343 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:09.501634 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	I0718 00:22:09.501705 1940237 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:09.501718 1940237 oci.go:661] temporary error: container missing-upgrade-387571 status is  but expect it to be exited
	I0718 00:22:09.501742 1940237 retry.go:31] will retry after 5.461921838s: couldn't verify container is exited. %v: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:14.963932 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:14.981214 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	I0718 00:22:14.981282 1940237 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:14.981294 1940237 oci.go:661] temporary error: container missing-upgrade-387571 status is  but expect it to be exited
	I0718 00:22:14.981317 1940237 retry.go:31] will retry after 7.275070499s: couldn't verify container is exited. %v: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:22.259265 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:22.280763 1940237 cli_runner.go:211] docker container inspect missing-upgrade-387571 --format={{.State.Status}} returned with exit code 1
	I0718 00:22:22.280826 1940237 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	I0718 00:22:22.280836 1940237 oci.go:661] temporary error: container missing-upgrade-387571 status is  but expect it to be exited
	I0718 00:22:22.280866 1940237 oci.go:88] couldn't shut down missing-upgrade-387571 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-387571": docker container inspect missing-upgrade-387571 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-387571
	 
	I0718 00:22:22.280925 1940237 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-387571
	I0718 00:22:22.298874 1940237 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-387571
	W0718 00:22:22.316883 1940237 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-387571 returned with exit code 1
	I0718 00:22:22.316966 1940237 cli_runner.go:164] Run: docker network inspect missing-upgrade-387571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 00:22:22.335635 1940237 cli_runner.go:164] Run: docker network rm missing-upgrade-387571
	I0718 00:22:22.431560 1940237 fix.go:114] Sleeping 1 second for extra luck!
	I0718 00:22:23.431737 1940237 start.go:125] createHost starting for "" (driver="docker")
	I0718 00:22:23.434710 1940237 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 00:22:23.434883 1940237 start.go:159] libmachine.API.Create for "missing-upgrade-387571" (driver="docker")
	I0718 00:22:23.434899 1940237 client.go:168] LocalClient.Create starting
	I0718 00:22:23.434963 1940237 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem
	I0718 00:22:23.434997 1940237 main.go:141] libmachine: Decoding PEM data...
	I0718 00:22:23.435011 1940237 main.go:141] libmachine: Parsing certificate...
	I0718 00:22:23.435071 1940237 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem
	I0718 00:22:23.435088 1940237 main.go:141] libmachine: Decoding PEM data...
	I0718 00:22:23.435098 1940237 main.go:141] libmachine: Parsing certificate...
	I0718 00:22:23.435332 1940237 cli_runner.go:164] Run: docker network inspect missing-upgrade-387571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 00:22:23.457349 1940237 cli_runner.go:211] docker network inspect missing-upgrade-387571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 00:22:23.457420 1940237 network_create.go:281] running [docker network inspect missing-upgrade-387571] to gather additional debugging logs...
	I0718 00:22:23.457436 1940237 cli_runner.go:164] Run: docker network inspect missing-upgrade-387571
	W0718 00:22:23.475868 1940237 cli_runner.go:211] docker network inspect missing-upgrade-387571 returned with exit code 1
	I0718 00:22:23.475895 1940237 network_create.go:284] error running [docker network inspect missing-upgrade-387571]: docker network inspect missing-upgrade-387571: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-387571 not found
	I0718 00:22:23.475908 1940237 network_create.go:286] output of [docker network inspect missing-upgrade-387571]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-387571 not found
	
	** /stderr **
	I0718 00:22:23.475975 1940237 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 00:22:23.495265 1940237 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a9366c9ca7aa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:79:c8:ee:aa} reservation:<nil>}
	I0718 00:22:23.496274 1940237 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-36f82de40cc2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:70:9b:c2:68} reservation:<nil>}
	I0718 00:22:23.496757 1940237 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001affc60}
	I0718 00:22:23.496773 1940237 network_create.go:123] attempt to create docker network missing-upgrade-387571 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0718 00:22:23.496836 1940237 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-387571 missing-upgrade-387571
	I0718 00:22:23.577097 1940237 network_create.go:107] docker network missing-upgrade-387571 192.168.67.0/24 created
	I0718 00:22:23.577624 1940237 kic.go:117] calculated static IP "192.168.67.2" for the "missing-upgrade-387571" container
	I0718 00:22:23.577726 1940237 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 00:22:23.595656 1940237 cli_runner.go:164] Run: docker volume create missing-upgrade-387571 --label name.minikube.sigs.k8s.io=missing-upgrade-387571 --label created_by.minikube.sigs.k8s.io=true
	I0718 00:22:23.613630 1940237 oci.go:103] Successfully created a docker volume missing-upgrade-387571
	I0718 00:22:23.613723 1940237 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-387571-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-387571 --entrypoint /usr/bin/test -v missing-upgrade-387571:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0718 00:22:24.170841 1940237 oci.go:107] Successfully prepared a docker volume missing-upgrade-387571
	I0718 00:22:24.170866 1940237 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0718 00:22:24.171380 1940237 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0718 00:22:24.171510 1940237 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0718 00:22:24.296131 1940237 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-387571 --name missing-upgrade-387571 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-387571 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-387571 --network missing-upgrade-387571 --ip 192.168.67.2 --volume missing-upgrade-387571:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0718 00:22:24.731578 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Running}}
	I0718 00:22:24.759404 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	I0718 00:22:24.787549 1940237 cli_runner.go:164] Run: docker exec missing-upgrade-387571 stat /var/lib/dpkg/alternatives/iptables
	I0718 00:22:24.872965 1940237 oci.go:144] the created container "missing-upgrade-387571" has a running status.
	I0718 00:22:24.872991 1940237 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/missing-upgrade-387571/id_rsa...
	I0718 00:22:26.023100 1940237 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/missing-upgrade-387571/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0718 00:22:26.068778 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	I0718 00:22:26.105551 1940237 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0718 00:22:26.105571 1940237 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-387571 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0718 00:22:26.193177 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	I0718 00:22:26.220073 1940237 machine.go:88] provisioning docker machine ...
	I0718 00:22:26.220100 1940237 ubuntu.go:169] provisioning hostname "missing-upgrade-387571"
	I0718 00:22:26.220169 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:26.254639 1940237 main.go:141] libmachine: Using SSH client type: native
	I0718 00:22:26.255121 1940237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34862 <nil> <nil>}
	I0718 00:22:26.255134 1940237 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-387571 && echo "missing-upgrade-387571" | sudo tee /etc/hostname
	I0718 00:22:26.453718 1940237 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-387571
	
	I0718 00:22:26.453812 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:26.477887 1940237 main.go:141] libmachine: Using SSH client type: native
	I0718 00:22:26.478325 1940237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34862 <nil> <nil>}
	I0718 00:22:26.478350 1940237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-387571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-387571/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-387571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 00:22:26.670563 1940237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 00:22:26.670614 1940237 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-1800837/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-1800837/.minikube}
	I0718 00:22:26.670639 1940237 ubuntu.go:177] setting up certificates
	I0718 00:22:26.670651 1940237 provision.go:83] configureAuth start
	I0718 00:22:26.670740 1940237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-387571
	I0718 00:22:26.700060 1940237 provision.go:138] copyHostCerts
	I0718 00:22:26.700151 1940237 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem, removing ...
	I0718 00:22:26.700163 1940237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem
	I0718 00:22:26.700249 1940237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem (1082 bytes)
	I0718 00:22:26.700364 1940237 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem, removing ...
	I0718 00:22:26.700369 1940237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem
	I0718 00:22:26.700400 1940237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem (1123 bytes)
	I0718 00:22:26.700465 1940237 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem, removing ...
	I0718 00:22:26.700473 1940237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem
	I0718 00:22:26.700505 1940237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem (1675 bytes)
	I0718 00:22:26.700561 1940237 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-387571 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-387571]
	I0718 00:22:27.101598 1940237 provision.go:172] copyRemoteCerts
	I0718 00:22:27.101675 1940237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 00:22:27.101725 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:27.121061 1940237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34862 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/missing-upgrade-387571/id_rsa Username:docker}
	I0718 00:22:27.223905 1940237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0718 00:22:27.248357 1940237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 00:22:27.271808 1940237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 00:22:27.295242 1940237 provision.go:86] duration metric: configureAuth took 624.567212ms
	I0718 00:22:27.295264 1940237 ubuntu.go:193] setting minikube options for container-runtime
	I0718 00:22:27.295449 1940237 config.go:182] Loaded profile config "missing-upgrade-387571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0718 00:22:27.295551 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:27.313924 1940237 main.go:141] libmachine: Using SSH client type: native
	I0718 00:22:27.314358 1940237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34862 <nil> <nil>}
	I0718 00:22:27.314382 1940237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0718 00:22:27.829265 1940237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0718 00:22:27.829291 1940237 machine.go:91] provisioned docker machine in 1.60919979s
	I0718 00:22:27.829300 1940237 client.go:171] LocalClient.Create took 4.39439604s
	I0718 00:22:27.829312 1940237 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-387571" took 4.394430082s
	I0718 00:22:27.829323 1940237 start.go:300] post-start starting for "missing-upgrade-387571" (driver="docker")
	I0718 00:22:27.829332 1940237 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 00:22:27.829396 1940237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 00:22:27.829442 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:27.857214 1940237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34862 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/missing-upgrade-387571/id_rsa Username:docker}
	I0718 00:22:27.965755 1940237 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 00:22:27.971422 1940237 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0718 00:22:27.971449 1940237 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0718 00:22:27.971461 1940237 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0718 00:22:27.971468 1940237 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0718 00:22:27.971477 1940237 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/addons for local assets ...
	I0718 00:22:27.971537 1940237 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/files for local assets ...
	I0718 00:22:27.971619 1940237 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> 18062262.pem in /etc/ssl/certs
	I0718 00:22:27.971726 1940237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 00:22:27.983815 1940237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem --> /etc/ssl/certs/18062262.pem (1708 bytes)
	I0718 00:22:28.018362 1940237 start.go:303] post-start completed in 189.024187ms
	I0718 00:22:28.018833 1940237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-387571
	I0718 00:22:28.047072 1940237 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/missing-upgrade-387571/config.json ...
	I0718 00:22:28.047367 1940237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 00:22:28.047420 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:28.074580 1940237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34862 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/missing-upgrade-387571/id_rsa Username:docker}
	I0718 00:22:28.183012 1940237 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 00:22:28.191623 1940237 start.go:128] duration metric: createHost completed in 4.759840052s
	I0718 00:22:28.191714 1940237 cli_runner.go:164] Run: docker container inspect missing-upgrade-387571 --format={{.State.Status}}
	W0718 00:22:28.238832 1940237 fix.go:128] unexpected machine state, will restart: <nil>
	I0718 00:22:28.238855 1940237 machine.go:88] provisioning docker machine ...
	I0718 00:22:28.238872 1940237 ubuntu.go:169] provisioning hostname "missing-upgrade-387571"
	I0718 00:22:28.238935 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:28.281187 1940237 main.go:141] libmachine: Using SSH client type: native
	I0718 00:22:28.281640 1940237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34862 <nil> <nil>}
	I0718 00:22:28.281652 1940237 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-387571 && echo "missing-upgrade-387571" | sudo tee /etc/hostname
	I0718 00:22:28.482599 1940237 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-387571
	
	I0718 00:22:28.482745 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:28.527886 1940237 main.go:141] libmachine: Using SSH client type: native
	I0718 00:22:28.528321 1940237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34862 <nil> <nil>}
	I0718 00:22:28.528347 1940237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-387571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-387571/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-387571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 00:22:28.707183 1940237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 00:22:28.707210 1940237 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-1800837/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-1800837/.minikube}
	I0718 00:22:28.707227 1940237 ubuntu.go:177] setting up certificates
	I0718 00:22:28.707236 1940237 provision.go:83] configureAuth start
	I0718 00:22:28.707303 1940237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-387571
	I0718 00:22:28.729568 1940237 provision.go:138] copyHostCerts
	I0718 00:22:28.729682 1940237 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem, removing ...
	I0718 00:22:28.729727 1940237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem
	I0718 00:22:28.729841 1940237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem (1082 bytes)
	I0718 00:22:28.729990 1940237 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem, removing ...
	I0718 00:22:28.730016 1940237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem
	I0718 00:22:28.730073 1940237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem (1123 bytes)
	I0718 00:22:28.730182 1940237 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem, removing ...
	I0718 00:22:28.730207 1940237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem
	I0718 00:22:28.730274 1940237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem (1675 bytes)
	I0718 00:22:28.730375 1940237 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-387571 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-387571]
	I0718 00:22:29.200155 1940237 provision.go:172] copyRemoteCerts
	I0718 00:22:29.200226 1940237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 00:22:29.200272 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:29.218719 1940237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34862 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/missing-upgrade-387571/id_rsa Username:docker}
	I0718 00:22:29.329239 1940237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 00:22:29.357972 1940237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0718 00:22:29.389898 1940237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 00:22:29.418791 1940237 provision.go:86] duration metric: configureAuth took 711.541297ms
	I0718 00:22:29.418866 1940237 ubuntu.go:193] setting minikube options for container-runtime
	I0718 00:22:29.419110 1940237 config.go:182] Loaded profile config "missing-upgrade-387571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0718 00:22:29.419269 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:29.451924 1940237 main.go:141] libmachine: Using SSH client type: native
	I0718 00:22:29.452363 1940237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34862 <nil> <nil>}
	I0718 00:22:29.452379 1940237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0718 00:22:29.921660 1940237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0718 00:22:29.921731 1940237 machine.go:91] provisioned docker machine in 1.682868516s
	I0718 00:22:29.921756 1940237 start.go:300] post-start starting for "missing-upgrade-387571" (driver="docker")
	I0718 00:22:29.921777 1940237 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 00:22:29.921890 1940237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 00:22:29.921965 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:29.970638 1940237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34862 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/missing-upgrade-387571/id_rsa Username:docker}
	I0718 00:22:30.086883 1940237 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 00:22:30.092463 1940237 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0718 00:22:30.092488 1940237 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0718 00:22:30.092499 1940237 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0718 00:22:30.092506 1940237 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0718 00:22:30.092516 1940237 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/addons for local assets ...
	I0718 00:22:30.092582 1940237 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/files for local assets ...
	I0718 00:22:30.092663 1940237 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> 18062262.pem in /etc/ssl/certs
	I0718 00:22:30.092769 1940237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 00:22:30.107822 1940237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem --> /etc/ssl/certs/18062262.pem (1708 bytes)
	I0718 00:22:30.154766 1940237 start.go:303] post-start completed in 232.983548ms
	I0718 00:22:30.154935 1940237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 00:22:30.155021 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:30.186603 1940237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34862 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/missing-upgrade-387571/id_rsa Username:docker}
	I0718 00:22:30.313193 1940237 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 00:22:30.326901 1940237 fix.go:56] fixHost completed within 26.508390618s
	I0718 00:22:30.326976 1940237 start.go:83] releasing machines lock for "missing-upgrade-387571", held for 26.508498785s
	I0718 00:22:30.327091 1940237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-387571
	I0718 00:22:30.367930 1940237 ssh_runner.go:195] Run: cat /version.json
	I0718 00:22:30.367984 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:30.368278 1940237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 00:22:30.368334 1940237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-387571
	I0718 00:22:30.424346 1940237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34862 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/missing-upgrade-387571/id_rsa Username:docker}
	I0718 00:22:30.438612 1940237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34862 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/missing-upgrade-387571/id_rsa Username:docker}
	W0718 00:22:30.555245 1940237 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0718 00:22:30.555379 1940237 ssh_runner.go:195] Run: systemctl --version
	I0718 00:22:30.678953 1940237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0718 00:22:30.886721 1940237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 00:22:30.894726 1940237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 00:22:30.937472 1940237 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0718 00:22:30.937561 1940237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 00:22:31.003769 1940237 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 00:22:31.003821 1940237 start.go:466] detecting cgroup driver to use...
	I0718 00:22:31.003858 1940237 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0718 00:22:31.003930 1940237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 00:22:31.044485 1940237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 00:22:31.061211 1940237 docker.go:196] disabling cri-docker service (if available) ...
	I0718 00:22:31.061289 1940237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0718 00:22:31.084639 1940237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0718 00:22:31.105189 1940237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0718 00:22:31.131125 1940237 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0718 00:22:31.131204 1940237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0718 00:22:31.329440 1940237 docker.go:212] disabling docker service ...
	I0718 00:22:31.329517 1940237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0718 00:22:31.351605 1940237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0718 00:22:31.369811 1940237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0718 00:22:31.546508 1940237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0718 00:22:31.714794 1940237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0718 00:22:31.731045 1940237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 00:22:31.754506 1940237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0718 00:22:31.754587 1940237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0718 00:22:31.778597 1940237 out.go:177] 
	W0718 00:22:31.780364 1940237 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0718 00:22:31.780388 1940237 out.go:239] * 
	* 
	W0718 00:22:31.781532 1940237 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 00:22:31.782981 1940237 out.go:177] 

** /stderr **
version_upgrade_test.go:343: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-387571 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-07-18 00:22:31.832819382 +0000 UTC m=+2710.550889730
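Note: the exit status 90 above bottoms out in the last stderr lines of the log: after the original container vanished, minikube recreated the machine on the legacy kicbase v0.0.17 rootfs, which ships no /etc/crio/crio.conf.d/02-crio.conf, so the in-place sed that rewrites pause_image exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal Go sketch of a guarded edit that seeds the drop-in when it is missing; this is illustrative only, not minikube's actual code, and setPauseImage and the seeded file skeleton are assumptions (it would also need to run as root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// setPauseImage rewrites pause_image in the cri-o drop-in, seeding the
	// file first when it does not exist -- the missing-file case is exactly
	// what made the sed above exit with status 2 on kicbase v0.0.17.
	func setPauseImage(path, image string) error {
		if _, err := os.Stat(path); os.IsNotExist(err) {
			// Assumed drop-in skeleton; old base images predate this file.
			content := fmt.Sprintf("[crio.image]\npause_image = %q\n", image)
			return os.WriteFile(path, []byte(content), 0o644)
		}
		// Same substitution the log shows, applied only when the file exists.
		expr := fmt.Sprintf("s|^.*pause_image = .*$|pause_image = %q|", image)
		return exec.Command("sed", "-i", expr, path).Run()
	}

	func main() {
		err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.2")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}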
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-387571
helpers_test.go:235: (dbg) docker inspect missing-upgrade-387571:

-- stdout --
	[
	    {
	        "Id": "e21320251c56bac21e5147af00c4cb441870eed139f0d1d94d544b7a5e9a63a5",
	        "Created": "2023-07-18T00:22:24.313643753Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1942433,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-18T00:22:24.724327384Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/e21320251c56bac21e5147af00c4cb441870eed139f0d1d94d544b7a5e9a63a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e21320251c56bac21e5147af00c4cb441870eed139f0d1d94d544b7a5e9a63a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e21320251c56bac21e5147af00c4cb441870eed139f0d1d94d544b7a5e9a63a5/hosts",
	        "LogPath": "/var/lib/docker/containers/e21320251c56bac21e5147af00c4cb441870eed139f0d1d94d544b7a5e9a63a5/e21320251c56bac21e5147af00c4cb441870eed139f0d1d94d544b7a5e9a63a5-json.log",
	        "Name": "/missing-upgrade-387571",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-387571:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-387571",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f8d0675fb2a2ea01b9568e393096f9404f140b5d3b1fdd7690b44d8feeb4cf13-init/diff:/var/lib/docker/overlay2/e8f3520c062cbac8f4631e1fcebc75cec97053cbc5d26b4c8dd05cb03dd26b9f/diff:/var/lib/docker/overlay2/793ee46f34d3b51c5b900c6ec2416c6666480fdafde073ff15a2eff1505c8e09/diff:/var/lib/docker/overlay2/83549c5070cad484ad09e2c17aaa5aa218411e7afdcc9ba663c8deed5cbeed03/diff:/var/lib/docker/overlay2/5ea1c9ee855ad95c20a0e451f2aad91c10e3987a4e249a27c47033f42901a759/diff:/var/lib/docker/overlay2/4a1e68ddfa6873b033eb8dd336afbada395b79e2caf6480ac49810f62a2777a6/diff:/var/lib/docker/overlay2/d00c85a1df167208c1eacb3fd3e1b87d552868c1dfce71f747ae5b0a2e371e54/diff:/var/lib/docker/overlay2/2f8d0c26778ef928fe13459abb1cee824232c1d3f786ceb715bf721f29916723/diff:/var/lib/docker/overlay2/f54d778fcf9fd2e754dde7b1f359cedc4a02e51cae0adf252b4c7644f7ab6e52/diff:/var/lib/docker/overlay2/623215d077707760cbd2323baae16e79345a653d16abb4e5a156caae4d5c83f3/diff:/var/lib/docker/overlay2/3006f1
7882d749d0f13570378ef4ae6acd949242e553f66af92d25baa2324c50/diff:/var/lib/docker/overlay2/74e79838c2797221007fb2727eeffdf50d840bcdb6eb27c0abc191c7bd522049/diff:/var/lib/docker/overlay2/921888b75d425f05a5484d114b7041987181d84efed74ff45a55d06c5fa9b533/diff:/var/lib/docker/overlay2/3422c4c8f953f5c25d787fdb8d952f329d4c3a80b2518f3cec621fd50f911cc0/diff:/var/lib/docker/overlay2/524f27cc7fe4dd0e3abb281ba7287a0f9117d8f578071dd2c3c0303e075fd98c/diff:/var/lib/docker/overlay2/8a59878375fbd3112820cee6a5863adf148af470d528b788e73be87834fac91c/diff:/var/lib/docker/overlay2/a549627c77e380c37082b7b7a15bfbe194240033c4402568303c623304d3dabc/diff:/var/lib/docker/overlay2/c7792f75ad1dddf6ce493bfe78ab2ba912145e62484e495529b9d2c16d67bae8/diff:/var/lib/docker/overlay2/9f8912c5c52319bc01aa96c322c204285fe465cf9f68a5aeba11a9acbefdd186/diff:/var/lib/docker/overlay2/1724530b3c0b1a1d4c2aa5906d7567eb7216dde9b0298b78c14e24a2c91a9b0c/diff:/var/lib/docker/overlay2/65baae65ba01917dd80e039bee7ea510716d27b7db5f6f31417b79395c0388a3/diff:/var/lib/d
ocker/overlay2/fb636680aeaef5f875cc2a1d0c023cf3877fbe00f75828aaeea087687e25e25a/diff:/var/lib/docker/overlay2/2e3c4e5bb7f321ba9984cbec708aba1f2297a8556ca406f8794cffdd7bc0acbf/diff:/var/lib/docker/overlay2/595d9c5e6a7eb460d52c1f2c997dbbc8f01dbd7e121de82952c5c51a53ced730/diff:/var/lib/docker/overlay2/7bea71b9cc12ff0aaebc5106c9ad3239ecc9956b5924a834e102795040639d66/diff:/var/lib/docker/overlay2/40e7fd7cb12c31fa4558e2574a5f10890588e391cadc8dbc5f117ca05e872601/diff:/var/lib/docker/overlay2/7f2d7ec3178f6f9f1b55caa996834543c3c0ec556a36cd33e8422edb058888a0/diff:/var/lib/docker/overlay2/c0e4d51600ca1ab9e3d4f9a935d1b3737219b5d9406227847281e334b65fc344/diff:/var/lib/docker/overlay2/8f7e058bf3f63e124338ad660a78fc2696e6bdac680821efeb95112f63be6b9f/diff:/var/lib/docker/overlay2/5d5dfc4fa683c2144d5e35e02d67b656b2e567f08098c007e91b5d0e042b09d1/diff:/var/lib/docker/overlay2/f4163654836a3e34c232396d591e8a6737f72ecc66fae5d435a51fd932c8c9db/diff:/var/lib/docker/overlay2/b2bec3e686b7be1acfb2d166633fdfd64c3028786ddbb473306d34249a6
e5406/diff:/var/lib/docker/overlay2/72bed38f9a14f6c0eb1a2ff0d41a82df3b49f06c531f554650fea34095d43d50/diff:/var/lib/docker/overlay2/258e925377794f47020c326f5fb535623488ff1fa9d149f355bc4980c2af5ca0/diff:/var/lib/docker/overlay2/d43811e11f3424679b1b7872e580b673ef9d85b0934e4eb488e2a6f8890989e3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8d0675fb2a2ea01b9568e393096f9404f140b5d3b1fdd7690b44d8feeb4cf13/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8d0675fb2a2ea01b9568e393096f9404f140b5d3b1fdd7690b44d8feeb4cf13/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8d0675fb2a2ea01b9568e393096f9404f140b5d3b1fdd7690b44d8feeb4cf13/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-387571",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-387571/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-387571",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-387571",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-387571",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "199980177a341860aa90a846d63a88db2b8fcc6cf541fc751a78d62fbf07105e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34862"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34861"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34858"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34860"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34859"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/199980177a34",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-387571": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e21320251c56",
	                        "missing-upgrade-387571"
	                    ],
	                    "NetworkID": "684a706c0e4e2a956121e4dc19a88457d60036001adddbaefe3063a0340c8e51",
	                    "EndpointID": "dd87cc51e565d5212686953550ab6ef0e670dc3dc728203bea4f0f38f1573887",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
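The inspect dump above is what helpers_test captures on failure; of all of it, only the port map and network block are consumed by the tooling. As a hedged aside, those fields can be read directly with Go-template queries instead of scanning the full JSON (container name taken from this run; the same templates appear in the cli_runner lines later in this report):

	# published host port for the container's SSH endpoint (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-387571
	# container IP on its per-profile Docker network
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' missing-upgrade-387571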
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-387571 -n missing-upgrade-387571
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-387571 -n missing-upgrade-387571: exit status 6 (522.41499ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0718 00:22:32.372419 1943619 status.go:415] kubeconfig endpoint: got: 192.168.59.244:8443, want: 192.168.67.2:8443

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-387571" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-387571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-387571
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-387571: (2.202128428s)
--- FAIL: TestMissingContainerUpgrade (134.02s)
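The status output above isolates the failure: the container is Running, but the kubeconfig written by the older binary still records 192.168.59.244:8443, while the recreated container came up on 192.168.67.2 (the endpoint mismatch logged by status.go:415). A minimal sketch of the check-and-repair sequence the warning itself recommends, with the profile name taken from this run:

	# where kubectl currently points
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
	# rewrite the kubeconfig entry for the running profile
	out/minikube-linux-arm64 update-context -p missing-upgrade-387571

Using out/minikube-linux-arm64 matches this report's binary under test; outside the harness the same repair is plain `minikube update-context`.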

TestStoppedBinaryUpgrade/Upgrade (72.14s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.17.0.1707884895.exe start -p stopped-upgrade-954789 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.17.0.1707884895.exe start -p stopped-upgrade-954789 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m1.682897685s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.17.0.1707884895.exe -p stopped-upgrade-954789 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.17.0.1707884895.exe -p stopped-upgrade-954789 stop: (3.627869115s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-954789 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-954789 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.831618432s)

-- stdout --
	* [stopped-upgrade-954789] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-954789 in cluster stopped-upgrade-954789
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-954789" ...
	
	

-- /stdout --
** stderr ** 
	I0718 00:24:25.061896 1954114 out.go:296] Setting OutFile to fd 1 ...
	I0718 00:24:25.062367 1954114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:24:25.062423 1954114 out.go:309] Setting ErrFile to fd 2...
	I0718 00:24:25.062460 1954114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:24:25.062738 1954114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	I0718 00:24:25.063427 1954114 out.go:303] Setting JSON to false
	I0718 00:24:25.064983 1954114 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32809,"bootTime":1689607056,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0718 00:24:25.065097 1954114 start.go:138] virtualization:  
	I0718 00:24:25.070484 1954114 out.go:177] * [stopped-upgrade-954789] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0718 00:24:25.073188 1954114 out.go:177]   - MINIKUBE_LOCATION=16899
	I0718 00:24:25.073244 1954114 notify.go:220] Checking for updates...
	I0718 00:24:25.079182 1954114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 00:24:25.081144 1954114 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0718 00:24:25.082990 1954114 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	I0718 00:24:25.084768 1954114 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0718 00:24:25.086532 1954114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 00:24:25.088982 1954114 config.go:182] Loaded profile config "stopped-upgrade-954789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0718 00:24:25.091389 1954114 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0718 00:24:25.093573 1954114 driver.go:373] Setting default libvirt URI to qemu:///system
	I0718 00:24:25.130581 1954114 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0718 00:24:25.130678 1954114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 00:24:25.244212 1954114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:48 SystemTime:2023-07-18 00:24:25.233840686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0718 00:24:25.244318 1954114 docker.go:294] overlay module found
	I0718 00:24:25.248640 1954114 out.go:177] * Using the docker driver based on existing profile
	I0718 00:24:25.250612 1954114 start.go:298] selected driver: docker
	I0718 00:24:25.250629 1954114 start.go:880] validating driver "docker" against &{Name:stopped-upgrade-954789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-954789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.166 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0718 00:24:25.250742 1954114 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 00:24:25.251354 1954114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 00:24:25.371610 1954114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:48 SystemTime:2023-07-18 00:24:25.36200727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0718 00:24:25.371910 1954114 cni.go:84] Creating CNI manager for ""
	I0718 00:24:25.371919 1954114 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0718 00:24:25.371932 1954114 start_flags.go:319] config:
	{Name:stopped-upgrade-954789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-954789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.166 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0718 00:24:25.374859 1954114 out.go:177] * Starting control plane node stopped-upgrade-954789 in cluster stopped-upgrade-954789
	I0718 00:24:25.377110 1954114 cache.go:122] Beginning downloading kic base image for docker with crio
	I0718 00:24:25.380283 1954114 out.go:177] * Pulling base image ...
	I0718 00:24:25.382199 1954114 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0718 00:24:25.382398 1954114 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0718 00:24:25.405987 1954114 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0718 00:24:25.406010 1954114 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0718 00:24:25.453539 1954114 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0718 00:24:25.453689 1954114 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/stopped-upgrade-954789/config.json ...
	I0718 00:24:25.453941 1954114 cache.go:195] Successfully downloaded all kic artifacts
	I0718 00:24:25.453985 1954114 start.go:365] acquiring machines lock for stopped-upgrade-954789: {Name:mkafeac248273843e2402d2630f0490619970c75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:25.454049 1954114 start.go:369] acquired machines lock for "stopped-upgrade-954789" in 36.758µs
	I0718 00:24:25.454069 1954114 start.go:96] Skipping create...Using existing machine configuration
	I0718 00:24:25.454075 1954114 fix.go:54] fixHost starting: 
	I0718 00:24:25.454341 1954114 cli_runner.go:164] Run: docker container inspect stopped-upgrade-954789 --format={{.State.Status}}
	I0718 00:24:25.454661 1954114 cache.go:107] acquiring lock: {Name:mkf3adb8fce5e1fb5ae0829224518143650ee450 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:25.454725 1954114 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0718 00:24:25.454738 1954114 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 83.921µs
	I0718 00:24:25.454746 1954114 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0718 00:24:25.454756 1954114 cache.go:107] acquiring lock: {Name:mkb4d63214113931a675a1f85c29d2bb8e46d535 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:25.454790 1954114 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0718 00:24:25.454797 1954114 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 42.748µs
	I0718 00:24:25.454805 1954114 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0718 00:24:25.454814 1954114 cache.go:107] acquiring lock: {Name:mk87fca18702787d10d8299b31d75f1bf5b34273 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:25.454849 1954114 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0718 00:24:25.454858 1954114 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 44.332µs
	I0718 00:24:25.454865 1954114 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0718 00:24:25.454873 1954114 cache.go:107] acquiring lock: {Name:mk8d356e5a8e8ecd7c773e8f4561e6eda01a0db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:25.454898 1954114 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0718 00:24:25.454906 1954114 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 34.191µs
	I0718 00:24:25.454913 1954114 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0718 00:24:25.454921 1954114 cache.go:107] acquiring lock: {Name:mk0bcf2410225d8090ba3303a181548f1c4c100a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:25.454950 1954114 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0718 00:24:25.454960 1954114 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 37.99µs
	I0718 00:24:25.454966 1954114 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0718 00:24:25.454975 1954114 cache.go:107] acquiring lock: {Name:mkabc6d6057401f474cbacab1c730a5ff7e2d6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:25.455004 1954114 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0718 00:24:25.455011 1954114 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 37.193µs
	I0718 00:24:25.455018 1954114 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0718 00:24:25.455027 1954114 cache.go:107] acquiring lock: {Name:mk1437ec8761366896c8ceb88aba8606743914b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:25.455058 1954114 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0718 00:24:25.455067 1954114 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 41.517µs
	I0718 00:24:25.455073 1954114 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0718 00:24:25.455081 1954114 cache.go:107] acquiring lock: {Name:mk9a6d3366467c89c6960be774c9154764df9767 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 00:24:25.455109 1954114 cache.go:115] /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0718 00:24:25.455117 1954114 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 36.422µs
	I0718 00:24:25.455124 1954114 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0718 00:24:25.455129 1954114 cache.go:87] Successfully saved all images to host disk.
	I0718 00:24:25.475985 1954114 fix.go:102] recreateIfNeeded on stopped-upgrade-954789: state=Stopped err=<nil>
	W0718 00:24:25.476015 1954114 fix.go:128] unexpected machine state, will restart: <nil>
	I0718 00:24:25.478629 1954114 out.go:177] * Restarting existing docker container for "stopped-upgrade-954789" ...
	I0718 00:24:25.480359 1954114 cli_runner.go:164] Run: docker start stopped-upgrade-954789
	I0718 00:24:25.982043 1954114 cli_runner.go:164] Run: docker container inspect stopped-upgrade-954789 --format={{.State.Status}}
	I0718 00:24:26.005604 1954114 kic.go:426] container "stopped-upgrade-954789" state is running.
	I0718 00:24:26.007477 1954114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-954789
	I0718 00:24:26.031532 1954114 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/stopped-upgrade-954789/config.json ...
	I0718 00:24:26.031763 1954114 machine.go:88] provisioning docker machine ...
	I0718 00:24:26.031786 1954114 ubuntu.go:169] provisioning hostname "stopped-upgrade-954789"
	I0718 00:24:26.031840 1954114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-954789
	I0718 00:24:26.053221 1954114 main.go:141] libmachine: Using SSH client type: native
	I0718 00:24:26.053676 1954114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34874 <nil> <nil>}
	I0718 00:24:26.053695 1954114 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-954789 && echo "stopped-upgrade-954789" | sudo tee /etc/hostname
	I0718 00:24:26.054509 1954114 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0718 00:24:29.218131 1954114 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-954789
	
	I0718 00:24:29.218205 1954114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-954789
	I0718 00:24:29.240577 1954114 main.go:141] libmachine: Using SSH client type: native
	I0718 00:24:29.241013 1954114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34874 <nil> <nil>}
	I0718 00:24:29.241031 1954114 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-954789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-954789/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-954789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 00:24:29.383571 1954114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 00:24:29.383596 1954114 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-1800837/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-1800837/.minikube}
	I0718 00:24:29.383626 1954114 ubuntu.go:177] setting up certificates
	I0718 00:24:29.383634 1954114 provision.go:83] configureAuth start
	I0718 00:24:29.383701 1954114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-954789
	I0718 00:24:29.406458 1954114 provision.go:138] copyHostCerts
	I0718 00:24:29.406526 1954114 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem, removing ...
	I0718 00:24:29.406546 1954114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem
	I0718 00:24:29.406600 1954114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.pem (1082 bytes)
	I0718 00:24:29.406698 1954114 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem, removing ...
	I0718 00:24:29.406708 1954114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem
	I0718 00:24:29.406729 1954114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/cert.pem (1123 bytes)
	I0718 00:24:29.406784 1954114 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem, removing ...
	I0718 00:24:29.406792 1954114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem
	I0718 00:24:29.406810 1954114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-1800837/.minikube/key.pem (1675 bytes)
	I0718 00:24:29.406923 1954114 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-954789 san=[192.168.70.166 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-954789]
	I0718 00:24:29.785572 1954114 provision.go:172] copyRemoteCerts
	I0718 00:24:29.785639 1954114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 00:24:29.785684 1954114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-954789
	I0718 00:24:29.805208 1954114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34874 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/stopped-upgrade-954789/id_rsa Username:docker}
	I0718 00:24:29.904670 1954114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 00:24:29.933601 1954114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0718 00:24:29.968607 1954114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 00:24:29.993613 1954114 provision.go:86] duration metric: configureAuth took 609.967021ms
	I0718 00:24:29.993635 1954114 ubuntu.go:193] setting minikube options for container-runtime
	I0718 00:24:29.993817 1954114 config.go:182] Loaded profile config "stopped-upgrade-954789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0718 00:24:29.993919 1954114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-954789
	I0718 00:24:30.016632 1954114 main.go:141] libmachine: Using SSH client type: native
	I0718 00:24:30.017076 1954114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34874 <nil> <nil>}
	I0718 00:24:30.017094 1954114 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0718 00:24:30.482837 1954114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0718 00:24:30.482859 1954114 machine.go:91] provisioned docker machine in 4.451078846s
	I0718 00:24:30.482870 1954114 start.go:300] post-start starting for "stopped-upgrade-954789" (driver="docker")
	I0718 00:24:30.482879 1954114 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 00:24:30.482943 1954114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 00:24:30.482994 1954114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-954789
	I0718 00:24:30.510435 1954114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34874 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/stopped-upgrade-954789/id_rsa Username:docker}
	I0718 00:24:30.613255 1954114 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 00:24:30.617344 1954114 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0718 00:24:30.617371 1954114 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0718 00:24:30.617383 1954114 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0718 00:24:30.617390 1954114 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0718 00:24:30.617400 1954114 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/addons for local assets ...
	I0718 00:24:30.617459 1954114 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-1800837/.minikube/files for local assets ...
	I0718 00:24:30.617542 1954114 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem -> 18062262.pem in /etc/ssl/certs
	I0718 00:24:30.617659 1954114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 00:24:30.626797 1954114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/ssl/certs/18062262.pem --> /etc/ssl/certs/18062262.pem (1708 bytes)
	I0718 00:24:30.655009 1954114 start.go:303] post-start completed in 172.110099ms
	I0718 00:24:30.655121 1954114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 00:24:30.655199 1954114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-954789
	I0718 00:24:30.681360 1954114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34874 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/stopped-upgrade-954789/id_rsa Username:docker}
	I0718 00:24:30.792639 1954114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 00:24:30.798190 1954114 fix.go:56] fixHost completed within 5.344092941s
	I0718 00:24:30.798245 1954114 start.go:83] releasing machines lock for "stopped-upgrade-954789", held for 5.344183311s
	I0718 00:24:30.798328 1954114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-954789
	I0718 00:24:30.821515 1954114 ssh_runner.go:195] Run: cat /version.json
	I0718 00:24:30.821546 1954114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 00:24:30.821565 1954114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-954789
	I0718 00:24:30.821591 1954114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-954789
	I0718 00:24:30.844249 1954114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34874 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/stopped-upgrade-954789/id_rsa Username:docker}
	I0718 00:24:30.857498 1954114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34874 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/stopped-upgrade-954789/id_rsa Username:docker}
	W0718 00:24:30.946755 1954114 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0718 00:24:30.946902 1954114 ssh_runner.go:195] Run: systemctl --version
	I0718 00:24:31.023751 1954114 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0718 00:24:31.141438 1954114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 00:24:31.148001 1954114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 00:24:31.171550 1954114 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0718 00:24:31.171748 1954114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 00:24:31.201475 1954114 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 00:24:31.201546 1954114 start.go:466] detecting cgroup driver to use...
	I0718 00:24:31.201589 1954114 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0718 00:24:31.201666 1954114 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 00:24:31.239958 1954114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 00:24:31.254820 1954114 docker.go:196] disabling cri-docker service (if available) ...
	I0718 00:24:31.254933 1954114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0718 00:24:31.269149 1954114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0718 00:24:31.285069 1954114 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0718 00:24:31.299272 1954114 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0718 00:24:31.299332 1954114 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0718 00:24:31.440751 1954114 docker.go:212] disabling docker service ...
	I0718 00:24:31.440823 1954114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0718 00:24:31.455704 1954114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0718 00:24:31.472945 1954114 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0718 00:24:31.626209 1954114 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0718 00:24:31.767882 1954114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0718 00:24:31.780575 1954114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 00:24:31.799928 1954114 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0718 00:24:31.799995 1954114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0718 00:24:31.817709 1954114 out.go:177] 
	W0718 00:24:31.819686 1954114 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0718 00:24:31.819714 1954114 out.go:239] * 
	* 
	W0718 00:24:31.820742 1954114 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 00:24:31.822740 1954114 out.go:177] 

** /stderr **
version_upgrade_test.go:212: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-954789 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (72.14s)
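The exit status 90 reduces to version skew: the profile was created by v1.17.0 on kicbase v0.0.17, which predates CRI-O's split drop-in configuration, so /etc/crio/crio.conf.d/02-crio.conf is absent and the HEAD binary's pause_image sed exits with status 2. Below is a defensive variant of that rewrite, run inside the node container; it is illustrative only (not minikube's actual remediation) and assumes older kicbase images keep pause_image in the monolithic /etc/crio/crio.conf:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# fall back to the monolithic config on images that predate the drop-in directory
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"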


Test pass (268/304)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 9.98
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.27.3/json-events 9.05
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.62
22 TestAddons/Setup 148.67
24 TestAddons/parallel/Registry 16.81
26 TestAddons/parallel/InspektorGadget 10.84
27 TestAddons/parallel/MetricsServer 5.85
30 TestAddons/parallel/CSI 57.33
31 TestAddons/parallel/Headlamp 12.83
32 TestAddons/parallel/CloudSpanner 5.68
35 TestAddons/serial/GCPAuth/Namespaces 0.18
36 TestAddons/StoppedEnableDisable 12.31
37 TestCertOptions 38.8
38 TestCertExpiration 271.62
40 TestForceSystemdFlag 43.14
41 TestForceSystemdEnv 43.1
47 TestErrorSpam/setup 29.58
48 TestErrorSpam/start 0.79
49 TestErrorSpam/status 1.07
50 TestErrorSpam/pause 1.86
51 TestErrorSpam/unpause 1.94
52 TestErrorSpam/stop 1.45
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 79.02
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 43.37
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.1
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.13
64 TestFunctional/serial/CacheCmd/cache/add_local 1.06
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
66 TestFunctional/serial/CacheCmd/cache/list 0.07
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.07
69 TestFunctional/serial/CacheCmd/cache/delete 0.11
70 TestFunctional/serial/MinikubeKubectlCmd 0.14
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
72 TestFunctional/serial/ExtraConfig 36.69
73 TestFunctional/serial/ComponentHealth 0.11
74 TestFunctional/serial/LogsCmd 1.82
75 TestFunctional/serial/LogsFileCmd 1.87
76 TestFunctional/serial/InvalidService 4.41
78 TestFunctional/parallel/ConfigCmd 0.51
79 TestFunctional/parallel/DashboardCmd 13.57
80 TestFunctional/parallel/DryRun 0.51
81 TestFunctional/parallel/InternationalLanguage 0.2
82 TestFunctional/parallel/StatusCmd 1.18
86 TestFunctional/parallel/ServiceCmdConnect 10.95
87 TestFunctional/parallel/AddonsCmd 0.22
88 TestFunctional/parallel/PersistentVolumeClaim 25.06
90 TestFunctional/parallel/SSHCmd 0.78
91 TestFunctional/parallel/CpCmd 1.48
93 TestFunctional/parallel/FileSync 0.4
94 TestFunctional/parallel/CertSync 2.07
98 TestFunctional/parallel/NodeLabels 0.1
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.82
103 TestFunctional/parallel/Version/short 0.08
104 TestFunctional/parallel/Version/components 1
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
109 TestFunctional/parallel/ImageCommands/ImageBuild 3
110 TestFunctional/parallel/ImageCommands/Setup 1.81
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.89
115 TestFunctional/parallel/ServiceCmd/DeployApp 12.33
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.92
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.02
118 TestFunctional/parallel/ServiceCmd/List 0.49
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
121 TestFunctional/parallel/ServiceCmd/Format 0.5
122 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.06
123 TestFunctional/parallel/ServiceCmd/URL 0.64
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.99
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.8
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.66
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.92
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
139 TestFunctional/parallel/ProfileCmd/profile_list 0.44
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
141 TestFunctional/parallel/MountCmd/any-port 8.25
142 TestFunctional/parallel/MountCmd/specific-port 2.11
143 TestFunctional/parallel/MountCmd/VerifyCleanup 3.21
144 TestFunctional/delete_addon-resizer_images 0.1
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 88.63
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.49
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.67
157 TestJSONOutput/start/Command 77.78
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.83
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.75
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.94
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.23
182 TestKicCustomNetwork/create_custom_network 46.78
183 TestKicCustomNetwork/use_default_bridge_network 33.05
184 TestKicExistingNetwork 38.97
185 TestKicCustomSubnet 34.87
186 TestKicStaticIP 34.58
187 TestMainNoArgs 0.05
188 TestMinikubeProfile 67.85
191 TestMountStart/serial/StartWithMountFirst 8.53
192 TestMountStart/serial/VerifyMountFirst 0.29
193 TestMountStart/serial/StartWithMountSecond 7.31
194 TestMountStart/serial/VerifyMountSecond 0.3
195 TestMountStart/serial/DeleteFirst 1.66
196 TestMountStart/serial/VerifyMountPostDelete 0.27
197 TestMountStart/serial/Stop 1.22
198 TestMountStart/serial/RestartStopped 7.96
199 TestMountStart/serial/VerifyMountPostStop 0.28
202 TestMultiNode/serial/FreshStart2Nodes 125.57
203 TestMultiNode/serial/DeployApp2Nodes 5.96
205 TestMultiNode/serial/AddNode 63.83
206 TestMultiNode/serial/ProfileList 0.33
207 TestMultiNode/serial/CopyFile 10.7
208 TestMultiNode/serial/StopNode 2.35
209 TestMultiNode/serial/StartAfterStop 12.47
210 TestMultiNode/serial/RestartKeepsNodes 120.51
211 TestMultiNode/serial/DeleteNode 5.06
212 TestMultiNode/serial/StopMultiNode 24
213 TestMultiNode/serial/RestartMultiNode 89.87
214 TestMultiNode/serial/ValidateNameConflict 36.81
219 TestPreload 171.5
221 TestScheduledStopUnix 109.4
224 TestInsufficientStorage 13.11
227 TestKubernetesUpgrade 381.46
230 TestPause/serial/Start 87.84
232 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
233 TestNoKubernetes/serial/StartWithK8s 44.43
234 TestNoKubernetes/serial/StartWithStopK8s 6.59
235 TestNoKubernetes/serial/Start 9.28
236 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
237 TestNoKubernetes/serial/ProfileList 1
238 TestNoKubernetes/serial/Stop 1.25
239 TestNoKubernetes/serial/StartNoArgs 7.59
240 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
248 TestNetworkPlugins/group/false 3.79
252 TestPause/serial/SecondStartNoReconfiguration 44.12
253 TestPause/serial/Pause 1.26
254 TestPause/serial/VerifyStatus 0.91
255 TestPause/serial/Unpause 1.35
256 TestPause/serial/PauseAgain 1.57
257 TestPause/serial/DeletePaused 2.97
258 TestPause/serial/VerifyDeletedResources 0.45
259 TestStoppedBinaryUpgrade/Setup 1.03
268 TestNetworkPlugins/group/auto/Start 81.4
269 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
270 TestNetworkPlugins/group/kindnet/Start 84.32
271 TestNetworkPlugins/group/auto/KubeletFlags 0.32
272 TestNetworkPlugins/group/auto/NetCatPod 11.45
273 TestNetworkPlugins/group/auto/DNS 0.26
274 TestNetworkPlugins/group/auto/Localhost 0.21
275 TestNetworkPlugins/group/auto/HairPin 0.2
276 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
277 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
278 TestNetworkPlugins/group/kindnet/NetCatPod 12.5
279 TestNetworkPlugins/group/calico/Start 80.07
280 TestNetworkPlugins/group/kindnet/DNS 0.3
281 TestNetworkPlugins/group/kindnet/Localhost 0.25
282 TestNetworkPlugins/group/kindnet/HairPin 0.25
283 TestNetworkPlugins/group/custom-flannel/Start 76.91
284 TestNetworkPlugins/group/calico/ControllerPod 5.05
285 TestNetworkPlugins/group/calico/KubeletFlags 0.33
286 TestNetworkPlugins/group/calico/NetCatPod 10.46
287 TestNetworkPlugins/group/calico/DNS 0.22
288 TestNetworkPlugins/group/calico/Localhost 0.2
289 TestNetworkPlugins/group/calico/HairPin 0.23
290 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.54
291 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.59
292 TestNetworkPlugins/group/custom-flannel/DNS 0.28
293 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
294 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
295 TestNetworkPlugins/group/enable-default-cni/Start 92.71
296 TestNetworkPlugins/group/flannel/Start 68.4
297 TestNetworkPlugins/group/flannel/ControllerPod 5.03
298 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
299 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.41
300 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
301 TestNetworkPlugins/group/flannel/NetCatPod 11.53
302 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
303 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
304 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
305 TestNetworkPlugins/group/flannel/DNS 0.23
306 TestNetworkPlugins/group/flannel/Localhost 0.21
307 TestNetworkPlugins/group/flannel/HairPin 0.18
308 TestNetworkPlugins/group/bridge/Start 94.79
310 TestStartStop/group/old-k8s-version/serial/FirstStart 128.77
311 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
312 TestNetworkPlugins/group/bridge/NetCatPod 13.37
313 TestNetworkPlugins/group/bridge/DNS 0.24
314 TestNetworkPlugins/group/bridge/Localhost 0.18
315 TestNetworkPlugins/group/bridge/HairPin 0.19
317 TestStartStop/group/no-preload/serial/FirstStart 69.88
318 TestStartStop/group/old-k8s-version/serial/DeployApp 11.54
319 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.82
320 TestStartStop/group/old-k8s-version/serial/Stop 13.15
321 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
322 TestStartStop/group/old-k8s-version/serial/SecondStart 435.18
323 TestStartStop/group/no-preload/serial/DeployApp 9.64
324 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.28
325 TestStartStop/group/no-preload/serial/Stop 12.02
326 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
327 TestStartStop/group/no-preload/serial/SecondStart 626.96
328 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
329 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
330 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
331 TestStartStop/group/old-k8s-version/serial/Pause 3.39
333 TestStartStop/group/embed-certs/serial/FirstStart 76.31
334 TestStartStop/group/embed-certs/serial/DeployApp 10.53
335 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
336 TestStartStop/group/embed-certs/serial/Stop 12.15
337 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
338 TestStartStop/group/embed-certs/serial/SecondStart 355.76
339 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
340 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
341 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
342 TestStartStop/group/no-preload/serial/Pause 3.43
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.98
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.51
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.29
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 618.87
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.03
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
353 TestStartStop/group/embed-certs/serial/Pause 3.4
355 TestStartStop/group/newest-cni/serial/FirstStart 48.26
356 TestStartStop/group/newest-cni/serial/DeployApp 0
357 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.15
358 TestStartStop/group/newest-cni/serial/Stop 1.37
359 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/newest-cni/serial/SecondStart 30.62
361 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
362 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
364 TestStartStop/group/newest-cni/serial/Pause 3.24
365 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
366 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
367 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
368 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.28
TestDownloadOnly/v1.16.0/json-events (9.98s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-823972 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-823972 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.983797566s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.98s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-823972
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-823972: exit status 85 (72.812587ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-823972 | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC |          |
	|         | -p download-only-823972        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 23:37:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 23:37:21.373071 1806231 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:37:21.373301 1806231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:37:21.373327 1806231 out.go:309] Setting ErrFile to fd 2...
	I0717 23:37:21.373347 1806231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:37:21.373643 1806231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	W0717 23:37:21.373815 1806231 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-1800837/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-1800837/.minikube/config/config.json: no such file or directory
	I0717 23:37:21.374250 1806231 out.go:303] Setting JSON to true
	I0717 23:37:21.375295 1806231 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29986,"bootTime":1689607056,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 23:37:21.375382 1806231 start.go:138] virtualization:  
	I0717 23:37:21.378502 1806231 out.go:97] [download-only-823972] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0717 23:37:21.380710 1806231 out.go:169] MINIKUBE_LOCATION=16899
	W0717 23:37:21.378751 1806231 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 23:37:21.378801 1806231 notify.go:220] Checking for updates...
	I0717 23:37:21.384507 1806231 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 23:37:21.386383 1806231 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0717 23:37:21.388468 1806231 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	I0717 23:37:21.390481 1806231 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 23:37:21.393974 1806231 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 23:37:21.394268 1806231 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 23:37:21.418427 1806231 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 23:37:21.418516 1806231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 23:37:21.507314 1806231 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-07-17 23:37:21.49647138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 23:37:21.507425 1806231 docker.go:294] overlay module found
	I0717 23:37:21.509392 1806231 out.go:97] Using the docker driver based on user configuration
	I0717 23:37:21.509419 1806231 start.go:298] selected driver: docker
	I0717 23:37:21.509426 1806231 start.go:880] validating driver "docker" against <nil>
	I0717 23:37:21.509534 1806231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 23:37:21.584967 1806231 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-07-17 23:37:21.575195621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 23:37:21.585164 1806231 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 23:37:21.585487 1806231 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0717 23:37:21.585687 1806231 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 23:37:21.588258 1806231 out.go:169] Using Docker driver with root privileges
	I0717 23:37:21.589973 1806231 cni.go:84] Creating CNI manager for ""
	I0717 23:37:21.590007 1806231 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 23:37:21.590028 1806231 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 23:37:21.590047 1806231 start_flags.go:319] config:
	{Name:download-only-823972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-823972 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:37:21.592173 1806231 out.go:97] Starting control plane node download-only-823972 in cluster download-only-823972
	I0717 23:37:21.592205 1806231 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 23:37:21.594134 1806231 out.go:97] Pulling base image ...
	I0717 23:37:21.594167 1806231 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 23:37:21.594309 1806231 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 23:37:21.611163 1806231 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 23:37:21.611746 1806231 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 23:37:21.611854 1806231 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 23:37:21.694983 1806231 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0717 23:37:21.695009 1806231 cache.go:57] Caching tarball of preloaded images
	I0717 23:37:21.695156 1806231 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 23:37:21.697501 1806231 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0717 23:37:21.697522 1806231 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0717 23:37:21.824186 1806231 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0717 23:37:26.119869 1806231 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-823972"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
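The non-zero exit from "minikube logs" above is expected for a --download-only profile: no control plane node is ever created, so logs exits with status 85 and the test treats that as a pass. The preload tarball itself is fetched against a published MD5, as the download line records. A minimal shell sketch (not part of the test suite; URL and checksum copied verbatim from the log above) of the same verification:

	# Fetch the v1.16.0 cri-o preload and check it against the logged MD5.
	curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4"
	echo "743cd3b7071469270e4dbdc0d89badaa  preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c -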

                                                
                                    
TestDownloadOnly/v1.27.3/json-events (9.05s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-823972 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-823972 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.052380574s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (9.05s)

                                                
                                    
TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-823972
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-823972: exit status 85 (78.398599ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-823972 | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC |          |
	|         | -p download-only-823972        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-823972 | jenkins | v1.31.0 | 17 Jul 23 23:37 UTC |          |
	|         | -p download-only-823972        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 23:37:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 23:37:31.430639 1806309 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:37:31.430839 1806309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:37:31.430868 1806309 out.go:309] Setting ErrFile to fd 2...
	I0717 23:37:31.430888 1806309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:37:31.431205 1806309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	W0717 23:37:31.431357 1806309 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-1800837/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-1800837/.minikube/config/config.json: no such file or directory
	I0717 23:37:31.431605 1806309 out.go:303] Setting JSON to true
	I0717 23:37:31.432583 1806309 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29996,"bootTime":1689607056,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 23:37:31.432676 1806309 start.go:138] virtualization:  
	I0717 23:37:31.435035 1806309 out.go:97] [download-only-823972] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0717 23:37:31.436846 1806309 out.go:169] MINIKUBE_LOCATION=16899
	I0717 23:37:31.435383 1806309 notify.go:220] Checking for updates...
	I0717 23:37:31.440833 1806309 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 23:37:31.442860 1806309 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0717 23:37:31.444722 1806309 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	I0717 23:37:31.446500 1806309 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 23:37:31.449852 1806309 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 23:37:31.450433 1806309 config.go:182] Loaded profile config "download-only-823972": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0717 23:37:31.450491 1806309 start.go:788] api.Load failed for download-only-823972: filestore "download-only-823972": Docker machine "download-only-823972" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 23:37:31.450634 1806309 driver.go:373] Setting default libvirt URI to qemu:///system
	W0717 23:37:31.450661 1806309 start.go:788] api.Load failed for download-only-823972: filestore "download-only-823972": Docker machine "download-only-823972" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 23:37:31.474669 1806309 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 23:37:31.474747 1806309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 23:37:31.557274 1806309 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 23:37:31.547552292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 23:37:31.557446 1806309 docker.go:294] overlay module found
	I0717 23:37:31.559443 1806309 out.go:97] Using the docker driver based on existing profile
	I0717 23:37:31.559467 1806309 start.go:298] selected driver: docker
	I0717 23:37:31.559473 1806309 start.go:880] validating driver "docker" against &{Name:download-only-823972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-823972 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:37:31.559657 1806309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 23:37:31.624918 1806309 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 23:37:31.614571814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 23:37:31.625367 1806309 cni.go:84] Creating CNI manager for ""
	I0717 23:37:31.625386 1806309 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 23:37:31.625398 1806309 start_flags.go:319] config:
	{Name:download-only-823972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-823972 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:37:31.627297 1806309 out.go:97] Starting control plane node download-only-823972 in cluster download-only-823972
	I0717 23:37:31.627323 1806309 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 23:37:31.629352 1806309 out.go:97] Pulling base image ...
	I0717 23:37:31.629377 1806309 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 23:37:31.629526 1806309 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 23:37:31.646276 1806309 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 23:37:31.646378 1806309 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 23:37:31.646403 1806309 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 23:37:31.646448 1806309 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 23:37:31.646456 1806309 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 23:37:31.696380 1806309 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0717 23:37:31.696404 1806309 cache.go:57] Caching tarball of preloaded images
	I0717 23:37:31.696555 1806309 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 23:37:31.698649 1806309 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0717 23:37:31.698672 1806309 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 ...
	I0717 23:37:31.823896 1806309 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:5385d65818d7d3a2749f9dcda9541749 -> /home/jenkins/minikube-integration/16899-1800837/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-823972"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-823972
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-672652 --alsologtostderr --binary-mirror http://127.0.0.1:44623 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-672652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-672652
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestAddons/Setup (148.67s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-579349 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-579349 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m28.670525412s)
--- PASS: TestAddons/Setup (148.67s)

                                                
                                    
TestAddons/parallel/Registry (16.81s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 48.047937ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-rn6hq" [bce4f7ce-2d6a-4082-97c3-291f3eb7fcc2] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012060121s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rb9fx" [3e2266e7-468f-409e-bde8-4a46879b119d] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011009571s
addons_test.go:316: (dbg) Run:  kubectl --context addons-579349 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-579349 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-579349 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.692851439s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-579349 ip
2023/07/17 23:40:26 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-579349 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.81s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ccxvc" [892facac-a727-4981-a121-1eaaa930a779] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.015164104s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-579349
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-579349: (5.821847174s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.837893ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-4m7tc" [0a10e3b3-112a-449a-9a48-3f6cdaf01d6a] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009416896s
addons_test.go:391: (dbg) Run:  kubectl --context addons-579349 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-579349 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)

                                                
                                    
TestAddons/parallel/CSI (57.33s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 6.944538ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-579349 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-579349 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7a3208dd-e64c-4eed-99cb-bdbb7592be9a] Pending
helpers_test.go:344: "task-pv-pod" [7a3208dd-e64c-4eed-99cb-bdbb7592be9a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7a3208dd-e64c-4eed-99cb-bdbb7592be9a] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.036683553s
addons_test.go:560: (dbg) Run:  kubectl --context addons-579349 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-579349 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-579349 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-579349 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-579349 delete pod task-pv-pod: (1.103295006s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-579349 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-579349 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-579349 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-579349 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a0f7c730-ffaf-439e-821e-dfa60862b0b7] Pending
helpers_test.go:344: "task-pv-pod-restore" [a0f7c730-ffaf-439e-821e-dfa60862b0b7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a0f7c730-ffaf-439e-821e-dfa60862b0b7] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.012354586s
addons_test.go:602: (dbg) Run:  kubectl --context addons-579349 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-579349 delete pod task-pv-pod-restore: (1.113695558s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-579349 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-579349 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-579349 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-579349 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.815003217s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-579349 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.33s)
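The long run of identical "get pvc" invocations above is the test helper polling .status.phase until each claim reaches Bound. A minimal sketch of the same wait written declaratively (illustration only, not what helpers_test.go actually runs; --for=jsonpath requires kubectl v1.23 or newer):

	# Block until the claims bind instead of polling the phase by hand.
	kubectl --context addons-579349 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m
	kubectl --context addons-579349 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc-restore --timeout=6m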

                                                
                                    
TestAddons/parallel/Headlamp (12.83s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-579349 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-579349 --alsologtostderr -v=1: (1.82335763s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-b6mq4" [90391b5d-def6-43db-8a72-2f6e6a4bafd4] Pending
helpers_test.go:344: "headlamp-66f6498c69-b6mq4" [90391b5d-def6-43db-8a72-2f6e6a4bafd4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-b6mq4" [90391b5d-def6-43db-8a72-2f6e6a4bafd4] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00864549s
--- PASS: TestAddons/parallel/Headlamp (12.83s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.68s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-7zk2q" [ec3168df-8817-483e-9249-ba759489152c] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011880135s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-579349
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-579349 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-579349 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-579349
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-579349: (12.036875895s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-579349
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-579349
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-579349
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

                                                
                                    
TestCertOptions (38.8s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-218011 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-218011 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.119901048s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-218011 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-218011 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-218011 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-218011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-218011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-218011: (1.972082794s)
--- PASS: TestCertOptions (38.80s)
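Note: the openssl check above asserts that the requested names and IPs landed in the apiserver certificate's SANs. A minimal Go sketch of the same check, assuming the certificate has first been copied out of the node to a local apiserver.crt (the local filename is illustrative, not part of the test suite):

	// sancheck.go: print the SANs of the apiserver certificate.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // assumed local copy
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The start flags above request extra SANs, so 192.168.15.15 and
		// www.google.com should appear in these lists.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}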

TestCertExpiration (271.62s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-509872 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-509872 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (43.121617462s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-509872 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0718 00:20:10.910525 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-509872 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (45.942650289s)
helpers_test.go:175: Cleaning up "cert-expiration-509872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-509872
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-509872: (2.552532198s)
--- PASS: TestCertExpiration (271.62s)
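Note: the --cert-expiration values above (3m, then 8760h) are plain Go duration strings. A small sketch of how they parse and what expiry they imply:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// 8760h is one non-leap year; 3m forces near-immediate expiry,
		// which is what the first start above sets up.
		for _, s := range []string{"3m", "8760h"} {
			d, err := time.ParseDuration(s)
			if err != nil {
				panic(err)
			}
			fmt.Printf("%-6s -> certs would expire around %s\n",
				s, time.Now().Add(d).Format(time.RFC3339))
		}
	}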

TestForceSystemdFlag (43.14s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-364079 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-364079 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.117837758s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-364079 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-364079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-364079
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-364079: (2.634247611s)
--- PASS: TestForceSystemdFlag (43.14s)
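Note: the `cat` above inspects CRI-O's drop-in config to confirm that --force-systemd switched the cgroup manager. A rough sketch of that assertion, assuming the exact key/value string below (taken from CRI-O's config syntax, not from this log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Same path the test cats; reading it locally is an assumption
		// for illustration (the test reads it over `minikube ssh`).
		data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			panic(err)
		}
		if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
			fmt.Println("CRI-O is using the systemd cgroup manager")
		} else {
			fmt.Println("systemd cgroup manager not configured")
		}
	}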

TestForceSystemdEnv (43.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-203847 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-203847 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.688029065s)
helpers_test.go:175: Cleaning up "force-systemd-env-203847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-203847
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-203847: (2.408873872s)
--- PASS: TestForceSystemdEnv (43.10s)

TestErrorSpam/setup (29.58s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-315960 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-315960 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-315960 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-315960 --driver=docker  --container-runtime=crio: (29.582072721s)
--- PASS: TestErrorSpam/setup (29.58s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (1.86s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 pause
--- PASS: TestErrorSpam/pause (1.86s)

TestErrorSpam/unpause (1.94s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 unpause
--- PASS: TestErrorSpam/unpause (1.94s)

TestErrorSpam/stop (1.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 stop: (1.259185023s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315960 --log_dir /tmp/nospam-315960 stop
--- PASS: TestErrorSpam/stop (1.45s)
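Note: the TestErrorSpam cases above rerun each subcommand and fail if the output contains unexpected warning or error lines. A hedged approximation of that scan, assuming minikube's non-emoji "!"/"X" line markers; the real test's filter is more involved:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same binary, profile, and log dir as the runs above.
		out, _ := exec.Command("out/minikube-linux-arm64", "-p", "nospam-315960",
			"--log_dir", "/tmp/nospam-315960", "status").CombinedOutput()
		for _, line := range strings.Split(string(out), "\n") {
			// "!" and "X" prefix warning/error lines in minikube's
			// non-emoji output style (an assumption here).
			if strings.HasPrefix(line, "!") || strings.HasPrefix(line, "X") {
				fmt.Println("unexpected spam:", line)
			}
		}
	}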

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16899-1800837/.minikube/files/etc/test/nested/copy/1806226/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.02s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-926032 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0717 23:45:10.909984 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:10.916992 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:10.927295 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:10.947580 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:10.987830 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:11.068122 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:11.228732 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:11.548923 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:12.189767 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:13.469974 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:16.030552 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:21.151225 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:31.391628 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0717 23:45:51.871885 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-926032 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.01634084s)
--- PASS: TestFunctional/serial/StartWithProxy (79.02s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.37s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-926032 --alsologtostderr -v=8
E0717 23:46:32.832183 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-926032 --alsologtostderr -v=8: (43.365773455s)
functional_test.go:659: soft start took 43.366266241s for "functional-926032" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.37s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-926032 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 cache add registry.k8s.io/pause:3.1: (1.342258904s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 cache add registry.k8s.io/pause:3.3: (1.477934047s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 cache add registry.k8s.io/pause:latest: (1.308971746s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-926032 /tmp/TestFunctionalserialCacheCmdcacheadd_local575191942/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 cache add minikube-local-cache-test:functional-926032
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 cache delete minikube-local-cache-test:functional-926032
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-926032
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-926032 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (328.268427ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 cache reload: (1.07724128s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)
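Note: the sequence above removes a cached image from the node, shows that `crictl inspecti` then fails, and restores the image with `cache reload`. A condensed Go replay of the same CLI steps, taken directly from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		const p = "functional-926032"
		const img = "registry.k8s.io/pause:latest"
		run("-p", p, "ssh", "sudo crictl rmi "+img)
		// inspecti should now fail: the image was removed from the node...
		if run("-p", p, "ssh", "sudo crictl inspecti "+img) == nil {
			fmt.Println("expected inspecti to fail after rmi")
		}
		// ...until `cache reload` pushes the cached image back in.
		run("-p", p, "cache", "reload")
		run("-p", p, "ssh", "sudo crictl inspecti "+img) // should succeed again
	}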

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 kubectl -- --context functional-926032 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-926032 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (36.69s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-926032 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-926032 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.6861804s)
functional_test.go:757: restart took 36.686286163s for "functional-926032" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.69s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-926032 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
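Note: ComponentHealth parses the control-plane pod list fetched above and asserts each component is Running and Ready. A minimal sketch of that decoding step; pipe `kubectl get po -l tier=control-plane -n kube-system -o json` into it:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// podList models just the fields the check needs from a v1.PodList.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		var pl podList
		if err := json.NewDecoder(os.Stdin).Decode(&pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, ready: %s\n",
				p.Metadata.Name, p.Status.Phase, ready)
		}
	}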

TestFunctional/serial/LogsCmd (1.82s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 logs: (1.819343085s)
--- PASS: TestFunctional/serial/LogsCmd (1.82s)

TestFunctional/serial/LogsFileCmd (1.87s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 logs --file /tmp/TestFunctionalserialLogsFileCmd2359122100/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 logs --file /tmp/TestFunctionalserialLogsFileCmd2359122100/001/logs.txt: (1.873724709s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.87s)

TestFunctional/serial/InvalidService (4.41s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-926032 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-926032
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-926032: exit status 115 (551.143328ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32646 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-926032 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.41s)
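Note: SVC_UNREACHABLE above fires because the Service object exists (hence the URL table) while no backing pod ever runs. A small sketch that surfaces the same condition by querying the service's endpoints, assuming the standard same-name Endpoints object:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-926032",
			"get", "endpoints", "invalid-svc").CombinedOutput()
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Printf("%s", out) // ENDPOINTS column is expected to show <none>
	}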

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-926032 config get cpus: exit status 14 (102.153642ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-926032 config get cpus: exit status 14 (82.849623ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

TestFunctional/parallel/DashboardCmd (13.57s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-926032 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-926032 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1831812: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.57s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-926032 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-926032 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (199.387415ms)
-- stdout --
	* [functional-926032] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0717 23:48:43.439818 1831537 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:48:43.440066 1831537 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:48:43.440096 1831537 out.go:309] Setting ErrFile to fd 2...
	I0717 23:48:43.440117 1831537 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:48:43.440414 1831537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	I0717 23:48:43.440802 1831537 out.go:303] Setting JSON to false
	I0717 23:48:43.441856 1831537 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30668,"bootTime":1689607056,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 23:48:43.441974 1831537 start.go:138] virtualization:  
	I0717 23:48:43.444455 1831537 out.go:177] * [functional-926032] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0717 23:48:43.446084 1831537 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 23:48:43.448002 1831537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 23:48:43.446306 1831537 notify.go:220] Checking for updates...
	I0717 23:48:43.449827 1831537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0717 23:48:43.451415 1831537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	I0717 23:48:43.452989 1831537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 23:48:43.454632 1831537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 23:48:43.456848 1831537 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:48:43.457527 1831537 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 23:48:43.482226 1831537 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 23:48:43.482329 1831537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 23:48:43.571106 1831537 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-17 23:48:43.561561074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 23:48:43.571208 1831537 docker.go:294] overlay module found
	I0717 23:48:43.573061 1831537 out.go:177] * Using the docker driver based on existing profile
	I0717 23:48:43.574732 1831537 start.go:298] selected driver: docker
	I0717 23:48:43.574766 1831537 start.go:880] validating driver "docker" against &{Name:functional-926032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-926032 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:48:43.574867 1831537 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 23:48:43.576814 1831537 out.go:177] 
	W0717 23:48:43.578174 1831537 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 23:48:43.579696 1831537 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-926032 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
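Note: the dry run above fails fast because 250MB is below minikube's usable minimum. A hedged sketch of that validation; the 1800MB floor is taken from the error text, and the helper is illustrative rather than minikube's actual implementation:

	package main

	import "fmt"

	const minUsableMB = 1800 // from the RSRC_INSUFFICIENT_REQ_MEMORY message above

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // --memory 250MB, as in the dry run above
		fmt.Println(validateMemory(4000)) // the profile's real allocation passes
	}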

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-926032 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-926032 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (201.979266ms)
-- stdout --
	* [functional-926032] minikube v1.31.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0717 23:48:43.941703 1831644 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:48:43.941908 1831644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:48:43.941940 1831644 out.go:309] Setting ErrFile to fd 2...
	I0717 23:48:43.941960 1831644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:48:43.942373 1831644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	I0717 23:48:43.942793 1831644 out.go:303] Setting JSON to false
	I0717 23:48:43.943871 1831644 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30668,"bootTime":1689607056,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0717 23:48:43.943967 1831644 start.go:138] virtualization:  
	I0717 23:48:43.946349 1831644 out.go:177] * [functional-926032] minikube v1.31.0 sur Ubuntu 20.04 (arm64)
	I0717 23:48:43.948983 1831644 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 23:48:43.950886 1831644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 23:48:43.949161 1831644 notify.go:220] Checking for updates...
	I0717 23:48:43.952861 1831644 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0717 23:48:43.954819 1831644 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	I0717 23:48:43.956691 1831644 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 23:48:43.958546 1831644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 23:48:43.960858 1831644 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:48:43.961452 1831644 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 23:48:43.986214 1831644 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 23:48:43.986309 1831644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 23:48:44.080111 1831644 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-17 23:48:44.070131307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 23:48:44.080220 1831644 docker.go:294] overlay module found
	I0717 23:48:44.082039 1831644 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 23:48:44.083517 1831644 start.go:298] selected driver: docker
	I0717 23:48:44.083536 1831644 start.go:880] validating driver "docker" against &{Name:functional-926032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-926032 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:48:44.083640 1831644 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 23:48:44.086078 1831644 out.go:177] 
	W0717 23:48:44.087792 1831644 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 23:48:44.089380 1831644 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.18s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)

TestFunctional/parallel/ServiceCmdConnect (10.95s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-926032 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-926032 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-tds2j" [30a68790-2ab0-44f9-a4e6-00627ba744cb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-tds2j" [30a68790-2ab0-44f9-a4e6-00627ba744cb] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.028205523s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32334
functional_test.go:1674: http://192.168.49.2:32334: success! body:
Hostname: hello-node-connect-58d66798bb-tds2j

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32334
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.95s)
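Note: the final step above fetches the NodePort URL printed by `minikube service ... --url` and checks the echoserver reply. A minimal sketch of that probe; the URL is hard-coded from this run and changes per deployment:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		resp, err := http.Get("http://192.168.49.2:32334")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// The test treats any readable body as success and logs it,
		// as in the echoserver output above.
		fmt.Printf("status %d\n%s", resp.StatusCode, body)
	}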

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (25.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2af33356-bbaf-4d41-a50b-669f6650bf27] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008964656s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-926032 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-926032 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-926032 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-926032 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1b155d63-3341-4b69-aa9e-ecaaac8addcb] Pending
helpers_test.go:344: "sp-pod" [1b155d63-3341-4b69-aa9e-ecaaac8addcb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1b155d63-3341-4b69-aa9e-ecaaac8addcb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008383193s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-926032 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-926032 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-926032 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [01297d8b-95d9-4c4e-a3c7-20f0d2a41110] Pending
helpers_test.go:344: "sp-pod" [01297d8b-95d9-4c4e-a3c7-20f0d2a41110] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [01297d8b-95d9-4c4e-a3c7-20f0d2a41110] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010778163s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-926032 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.06s)
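
Note: the sequence above is the whole persistence check; it can be replayed by hand against the same profile using the object names from the log (pvc "myclaim", pod "sp-pod", mount "/tmp/mount"). A minimal sketch, assuming the repo's testdata manifests are available locally:

	kubectl --context functional-926032 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-926032 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-926032 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-926032 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-926032 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-926032 exec sp-pod -- ls /tmp/mount    # "foo" should survive the pod recreation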

TestFunctional/parallel/SSHCmd (0.78s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

TestFunctional/parallel/CpCmd (1.48s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh -n functional-926032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 cp functional-926032:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2556502656/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh -n functional-926032 "sudo cat /home/docker/cp-test.txt"
E0717 23:47:54.753092 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CpCmd (1.48s)

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1806226/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "sudo cat /etc/test/nested/copy/1806226/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.07s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1806226.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "sudo cat /etc/ssl/certs/1806226.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1806226.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "sudo cat /usr/share/ca-certificates/1806226.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/18062262.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "sudo cat /etc/ssl/certs/18062262.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/18062262.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "sudo cat /usr/share/ca-certificates/18062262.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.07s)
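
Note: the hashed filenames checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names for the synced certs. If verifying by hand, the expected hash can be derived from the cert itself; a sketch, assuming openssl is available in the VM:

	out/minikube-linux-arm64 -p functional-926032 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/1806226.pem"    # expected to print 51391683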

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-926032 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-926032 ssh "sudo systemctl is-active docker": exit status 1 (377.443795ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-926032 ssh "sudo systemctl is-active containerd": exit status 1 (441.917379ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)
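
Note: this profile runs crio (ContainerRuntime=crio per the profile config logged elsewhere in this report), so docker and containerd reporting "inactive" with exit status 3 is the expected outcome here. The complementary check, not part of this test, would be a sketch like:

	out/minikube-linux-arm64 -p functional-926032 ssh "sudo systemctl is-active crio"    # expected: "active", exit status 0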

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 version -o=json --components: (1.00184123s)
--- PASS: TestFunctional/parallel/Version/components (1.00s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-926032 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-926032
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-926032 image ls --format short --alsologtostderr:
I0717 23:48:54.221785 1832954 out.go:296] Setting OutFile to fd 1 ...
I0717 23:48:54.221996 1832954 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 23:48:54.222017 1832954 out.go:309] Setting ErrFile to fd 2...
I0717 23:48:54.222036 1832954 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 23:48:54.222326 1832954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
I0717 23:48:54.223031 1832954 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 23:48:54.223261 1832954 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 23:48:54.223820 1832954 cli_runner.go:164] Run: docker container inspect functional-926032 --format={{.State.Status}}
I0717 23:48:54.259623 1832954 ssh_runner.go:195] Run: systemctl --version
I0717 23:48:54.260290 1832954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-926032
I0717 23:48:54.298556 1832954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34673 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/functional-926032/id_rsa Username:docker}
I0717 23:48:54.393604 1832954 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-926032 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/google-containers/addon-resizer  | functional-926032  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                    | 3.5.7-0            | 24bc64e911039 | 182MB  |
| registry.k8s.io/kube-controller-manager | v1.27.3            | ab3683b584ae5 | 109MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-apiserver          | v1.27.3            | 39dfb036b0986 | 116MB  |
| docker.io/library/nginx                 | alpine             | 66bf2c914bf4d | 42.8MB |
| docker.io/library/nginx                 | latest             | 2002d33a54f72 | 196MB  |
| registry.k8s.io/kube-scheduler          | v1.27.3            | bcb9e554eaab6 | 57.6MB |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b18bf71b941ba | 60.9MB |
| localhost/my-image                      | functional-926032  | f1e0a733d98dd | 1.64MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-proxy              | v1.27.3            | fb73e92641fd5 | 68.1MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-926032 image ls --format table --alsologtostderr:
I0717 23:48:57.845233 1833316 out.go:296] Setting OutFile to fd 1 ...
I0717 23:48:57.845419 1833316 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 23:48:57.845428 1833316 out.go:309] Setting ErrFile to fd 2...
I0717 23:48:57.845435 1833316 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 23:48:57.845717 1833316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
I0717 23:48:57.846398 1833316 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 23:48:57.846556 1833316 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 23:48:57.847041 1833316 cli_runner.go:164] Run: docker container inspect functional-926032 --format={{.State.Status}}
I0717 23:48:57.876443 1833316 ssh_runner.go:195] Run: systemctl --version
I0717 23:48:57.876497 1833316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-926032
I0717 23:48:57.902474 1833316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34673 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/functional-926032/id_rsa Username:docker}
I0717 23:48:58.005907 1833316 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-926032 image ls --format json --alsologtostderr:
[{"id":"ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0","registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"108667702"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6","docker.io/library/nginx@sha256:40199b09f65752fed2a540913a037a7a2c3120bd9d4cf20e7d85caafa66381d8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"42812731"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"182283991"},{"id":"39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473","repoDigests":["registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090","registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"116204496"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f","docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"60881430"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"469625fe74529ea4a16d7b49b47d9b8cd8047003aabad81baf7388d43ef99f5f","repoDigests":["docker.io/library/1e359532ab1bde816148edd2652119cda1296d975b204ea637becbc83ed0d941-tmp@sha256:ec4b7e4ae14db6185a055c12cbcb2f8674a1ce4d45dd51b97bb665a2da286110"],"repoTags":[],"size":"1637643"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540","repoDigests":["registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf","registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"57615158"},{"id":"fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a","repoDigests":["registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53","registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"68099991"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f","repoDigests":["docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef","docker.io/library/nginx@sha256:b02b0565e769314abcf0be98f78cb473bcf0a2280c11fd01a13f0043a62e5059"],"repoTags":["docker.io/library/nginx:latest"],"size":"196441873"},{"id":"f1e0a733d98dd2ea6f841069b5ec8b30960cd356b8ec291285dba6c2cf2ffd06","repoDigests":["localhost/my-image@sha256:0799e7e9c80e96720f948fcc792aa9e32e00e9e00cede747bd11cd61abe5c9a4"],"repoTags":["localhost/my-image:functional-926032"],"size":"1640226"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-926032"],"size":"34114467"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-926032 image ls --format json --alsologtostderr:
I0717 23:48:57.713804 1833288 out.go:296] Setting OutFile to fd 1 ...
I0717 23:48:57.713959 1833288 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 23:48:57.713967 1833288 out.go:309] Setting ErrFile to fd 2...
I0717 23:48:57.713972 1833288 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 23:48:57.714474 1833288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
I0717 23:48:57.715096 1833288 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 23:48:57.715216 1833288 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 23:48:57.715700 1833288 cli_runner.go:164] Run: docker container inspect functional-926032 --format={{.State.Status}}
I0717 23:48:57.735485 1833288 ssh_runner.go:195] Run: systemctl --version
I0717 23:48:57.735537 1833288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-926032
I0717 23:48:57.765410 1833288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34673 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/functional-926032/id_rsa Username:docker}
I0717 23:48:57.860245 1833288 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
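
Note: the json format is the machine-readable variant of the same listing; a sketch for pulling just the tags out of it, assuming jq is available on the host:

	out/minikube-linux-arm64 -p functional-926032 image ls --format json | jq -r '.[].repoTags[]'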

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-926032 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "182283991"
- id: ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "108667702"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f
repoDigests:
- docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef
- docker.io/library/nginx@sha256:b02b0565e769314abcf0be98f78cb473bcf0a2280c11fd01a13f0043a62e5059
repoTags:
- docker.io/library/nginx:latest
size: "196441873"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-926032
size: "34114467"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "68099991"
- id: b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "60881430"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "116204496"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
- docker.io/library/nginx@sha256:40199b09f65752fed2a540913a037a7a2c3120bd9d4cf20e7d85caafa66381d8
repoTags:
- docker.io/library/nginx:alpine
size: "42812731"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "57615158"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-926032 image ls --format yaml --alsologtostderr:
I0717 23:48:54.529022 1832985 out.go:296] Setting OutFile to fd 1 ...
I0717 23:48:54.529250 1832985 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 23:48:54.529282 1832985 out.go:309] Setting ErrFile to fd 2...
I0717 23:48:54.529300 1832985 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 23:48:54.529614 1832985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
I0717 23:48:54.530225 1832985 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 23:48:54.530392 1832985 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 23:48:54.530926 1832985 cli_runner.go:164] Run: docker container inspect functional-926032 --format={{.State.Status}}
I0717 23:48:54.558523 1832985 ssh_runner.go:195] Run: systemctl --version
I0717 23:48:54.558577 1832985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-926032
I0717 23:48:54.586138 1832985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34673 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/functional-926032/id_rsa Username:docker}
I0717 23:48:54.708115 1832985 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-926032 ssh pgrep buildkitd: exit status 1 (370.213319ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image build -t localhost/my-image:functional-926032 testdata/build --alsologtostderr
2023/07/17 23:48:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 image build -t localhost/my-image:functional-926032 testdata/build --alsologtostderr: (2.365561505s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-926032 image build -t localhost/my-image:functional-926032 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 469625fe745
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-926032
--> f1e0a733d98
Successfully tagged localhost/my-image:functional-926032
f1e0a733d98dd2ea6f841069b5ec8b30960cd356b8ec291285dba6c2cf2ffd06
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-926032 image build -t localhost/my-image:functional-926032 testdata/build --alsologtostderr:
I0717 23:48:55.230710 1833100 out.go:296] Setting OutFile to fd 1 ...
I0717 23:48:55.231765 1833100 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 23:48:55.231803 1833100 out.go:309] Setting ErrFile to fd 2...
I0717 23:48:55.231826 1833100 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 23:48:55.232157 1833100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
I0717 23:48:55.232827 1833100 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 23:48:55.233550 1833100 config.go:182] Loaded profile config "functional-926032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 23:48:55.234190 1833100 cli_runner.go:164] Run: docker container inspect functional-926032 --format={{.State.Status}}
I0717 23:48:55.253661 1833100 ssh_runner.go:195] Run: systemctl --version
I0717 23:48:55.253717 1833100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-926032
I0717 23:48:55.272607 1833100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34673 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/functional-926032/id_rsa Username:docker}
I0717 23:48:55.365062 1833100 build_images.go:151] Building image from path: /tmp/build.1441306448.tar
I0717 23:48:55.365143 1833100 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 23:48:55.376316 1833100 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1441306448.tar
I0717 23:48:55.381054 1833100 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1441306448.tar: stat -c "%s %y" /var/lib/minikube/build/build.1441306448.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1441306448.tar': No such file or directory
I0717 23:48:55.381091 1833100 ssh_runner.go:362] scp /tmp/build.1441306448.tar --> /var/lib/minikube/build/build.1441306448.tar (3072 bytes)
I0717 23:48:55.411526 1833100 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1441306448
I0717 23:48:55.423069 1833100 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1441306448 -xf /var/lib/minikube/build/build.1441306448.tar
I0717 23:48:55.434506 1833100 crio.go:297] Building image: /var/lib/minikube/build/build.1441306448
I0717 23:48:55.434647 1833100 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-926032 /var/lib/minikube/build/build.1441306448 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0717 23:48:57.495521 1833100 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-926032 /var/lib/minikube/build/build.1441306448 --cgroup-manager=cgroupfs: (2.060825871s)
I0717 23:48:57.495595 1833100 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1441306448
I0717 23:48:57.506574 1833100 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1441306448.tar
I0717 23:48:57.517204 1833100 build_images.go:207] Built localhost/my-image:functional-926032 from /tmp/build.1441306448.tar
I0717 23:48:57.517233 1833100 build_images.go:123] succeeded building to: functional-926032
I0717 23:48:57.517238 1833100 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.00s)
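
Note: from the STEP 1/3..3/3 lines above, the testdata/build context used here evidently reduces to a three-line Dockerfile plus a content.txt file; a reconstruction from the logged steps (the file itself is not printed in the log):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /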

TestFunctional/parallel/ImageCommands/Setup (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.777816289s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-926032
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image load --daemon gcr.io/google-containers/addon-resizer:functional-926032 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 image load --daemon gcr.io/google-containers/addon-resizer:functional-926032 --alsologtostderr: (5.607828835s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.89s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-926032 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-926032 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-9rkzl" [641f0ce8-f92e-472c-b18b-2cdf6409967f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-9rkzl" [641f0ce8-f92e-472c-b18b-2cdf6409967f] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.06558127s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.33s)
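
Note: the NodePort that the later ServiceCmd subtests resolve (30940) is assigned when the deployment is exposed above; it can also be read directly with a jsonpath query, a sketch:

	kubectl --context functional-926032 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'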

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image load --daemon gcr.io/google-containers/addon-resizer:functional-926032 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 image load --daemon gcr.io/google-containers/addon-resizer:functional-926032 --alsologtostderr: (2.685910358s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.716286264s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-926032
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image load --daemon gcr.io/google-containers/addon-resizer:functional-926032 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 image load --daemon gcr.io/google-containers/addon-resizer:functional-926032 --alsologtostderr: (3.964739489s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.02s)

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 service list -o json
functional_test.go:1493: Took "492.166516ms" to run "out/minikube-linux-arm64 -p functional-926032 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30940
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image save gcr.io/google-containers/addon-resizer:functional-926032 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 image save gcr.io/google-containers/addon-resizer:functional-926032 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.055577603s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.06s)

TestFunctional/parallel/ServiceCmd/URL (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30940
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.64s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image rm gcr.io/google-containers/addon-resizer:functional-926032 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.725305923s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.99s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.8s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-926032 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-926032 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-926032 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1829831: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-926032 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.80s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-926032 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-926032 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d671ba41-3ded-4dd1-9858-71825ce34767] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d671ba41-3ded-4dd1-9858-71825ce34767] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.021433458s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.66s)
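
The Setup helper polls pod state by label until the pod reports Running; a rough stand-alone equivalent with kubectl, assuming the same context and the run=nginx-svc label from testdata/testsvc.yaml:

	# Wait up to the test's 4m budget for the service pod to become Ready.
	kubectl --context functional-926032 wait --for=condition=ready pod -l run=nginx-svc --timeout=4m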

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-926032
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 image save --daemon gcr.io/google-containers/addon-resizer:functional-926032 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-926032 image save --daemon gcr.io/google-containers/addon-resizer:functional-926032 --alsologtostderr: (2.869249187s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-926032
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.92s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-926032 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.151.9 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
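
AccessDirect probes the LoadBalancer address that the `minikube tunnel` daemon routes to the host. A minimal manual check, assuming a tunnel is already running for the profile (the jsonpath is the one the IngressIP step uses above):

	# Read the IP the tunnel assigned to the service, then probe it.
	IP=$(kubectl --context functional-926032 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -sf "http://$IP/" >/dev/null && echo "tunnel at http://$IP is working!"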

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-926032 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "351.41464ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "84.869598ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "333.242204ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "54.195231ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)
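
The JSON form of `profile list` is the scriptable one; a sketch of pulling profile names out of it, assuming jq is available and the valid/invalid top-level keys of minikube's profile-list JSON schema:

	# Print the name of every valid profile.
	out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'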

TestFunctional/parallel/MountCmd/any-port (8.25s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-926032 /tmp/TestFunctionalparallelMountCmdany-port2023430335/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689637718766964476" to /tmp/TestFunctionalparallelMountCmdany-port2023430335/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689637718766964476" to /tmp/TestFunctionalparallelMountCmdany-port2023430335/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689637718766964476" to /tmp/TestFunctionalparallelMountCmdany-port2023430335/001/test-1689637718766964476
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (395.681996ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 23:48 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 23:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 23:48 test-1689637718766964476
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh cat /mount-9p/test-1689637718766964476
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-926032 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ac7c7faa-343d-4655-a5ff-2ce112bf9f81] Pending
helpers_test.go:344: "busybox-mount" [ac7c7faa-343d-4655-a5ff-2ce112bf9f81] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ac7c7faa-343d-4655-a5ff-2ce112bf9f81] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ac7c7faa-343d-4655-a5ff-2ce112bf9f81] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.01735858s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-926032 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-926032 /tmp/TestFunctionalparallelMountCmdany-port2023430335/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.25s)
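
The any-port test retries findmnt inside the guest until the 9p filesystem appears, then round-trips files through a pod. The core check by hand, assuming an arbitrary host directory (/tmp/demo here):

	# Start a host-to-guest 9p mount in the background, then verify it from the guest.
	mkdir -p /tmp/demo
	out/minikube-linux-arm64 mount -p functional-926032 /tmp/demo:/mount-9p &
	out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T /mount-9p | grep 9p"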

TestFunctional/parallel/MountCmd/specific-port (2.11s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-926032 /tmp/TestFunctionalparallelMountCmdspecific-port2693916102/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (374.928924ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-926032 /tmp/TestFunctionalparallelMountCmdspecific-port2693916102/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-926032 ssh "sudo umount -f /mount-9p": exit status 1 (423.985399ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-926032 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-926032 /tmp/TestFunctionalparallelMountCmdspecific-port2693916102/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.11s)

TestFunctional/parallel/MountCmd/VerifyCleanup (3.21s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-926032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1977746520/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-926032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1977746520/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-926032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1977746520/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T" /mount1: exit status 1 (1.217566952s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-926032 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-926032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1977746520/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-926032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1977746520/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-926032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1977746520/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.21s)
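
VerifyCleanup leans on `mount --kill=true` (run at functional_test_mount_test.go:370 above) to reap every mount daemon for the profile at once; a sketch of the same cleanup, with the failing findmnt acting as the post-condition:

	# Kill all outstanding minikube mount processes for this profile,
	# then confirm nothing is mounted any more (findmnt should now fail).
	out/minikube-linux-arm64 mount -p functional-926032 --kill=true
	out/minikube-linux-arm64 -p functional-926032 ssh "findmnt -T /mount1" || echo "mounts cleaned up"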

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-926032
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-926032
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-926032
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (88.63s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-856061 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0717 23:50:10.910512 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-856061 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m28.630543337s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (88.63s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.49s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-856061 addons enable ingress --alsologtostderr -v=5
E0717 23:50:38.593335 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-856061 addons enable ingress --alsologtostderr -v=5: (12.49089629s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.49s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-856061 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

TestJSONOutput/start/Command (77.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-098702 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0717 23:54:19.975295 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-098702 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m17.777857322s)
--- PASS: TestJSONOutput/start/Command (77.78s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.83s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-098702 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.83s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.75s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-098702 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-098702 --output=json --user=testUser
E0717 23:55:10.910095 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-098702 --output=json --user=testUser: (5.938374203s)
--- PASS: TestJSONOutput/stop/Command (5.94s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-858866 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-858866 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.610787ms)

-- stdout --
	{"specversion":"1.0","id":"9224bb68-a032-46a9-8106-ffc9504c361f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-858866] minikube v1.31.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a452399a-9cf2-4981-9a4d-1b8c1ae499c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"3300c641-dda9-4c80-a0a5-ba3ccd2d464d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e17bb894-8ac5-4c52-b99a-244b30a813b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig"}}
	{"specversion":"1.0","id":"1c17ce5a-6599-4a6a-bfc0-ecbd2b6cd54c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube"}}
	{"specversion":"1.0","id":"a4af64e6-aded-4950-9a64-3625b3f9c84d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"209dcfcf-94f7-4ee5-a87c-c91cb50b2041","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cf83a636-ae7d-49f0-a413-d438c42d3e31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-858866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-858866
--- PASS: TestErrorJSONOutput (0.23s)
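
Each line emitted under --output=json is a CloudEvents envelope (see the stdout block above), so error events can be picked out mechanically; a sketch, assuming jq and a throwaway profile name:

	# Surface only error events and their messages from a JSON-mode run.
	out/minikube-linux-arm64 start -p demo --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'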

TestKicCustomNetwork/create_custom_network (46.78s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-000754 --network=
E0717 23:55:41.895504 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:55:42.852147 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:55:42.857708 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:55:42.867941 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:55:42.888172 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:55:42.928410 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:55:43.008691 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:55:43.169016 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:55:43.489544 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:55:44.130436 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:55:45.411207 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:55:47.975197 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:55:53.095518 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0717 23:56:03.336523 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-000754 --network=: (44.616932166s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-000754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-000754
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-000754: (2.135902578s)
--- PASS: TestKicCustomNetwork/create_custom_network (46.78s)

TestKicCustomNetwork/use_default_bridge_network (33.05s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-159201 --network=bridge
E0717 23:56:23.816747 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-159201 --network=bridge: (31.007903244s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-159201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-159201
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-159201: (2.018232431s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.05s)

TestKicExistingNetwork (38.97s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-809447 --network=existing-network
E0717 23:57:04.776995 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-809447 --network=existing-network: (37.178267352s)
helpers_test.go:175: Cleaning up "existing-network-809447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-809447
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-809447: (1.620580126s)
--- PASS: TestKicExistingNetwork (38.97s)

TestKicCustomSubnet (34.87s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-546733 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-546733 --subnet=192.168.60.0/24: (32.716178122s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-546733 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-546733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-546733
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-546733: (2.134128588s)
--- PASS: TestKicCustomSubnet (34.87s)
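
For the KIC network tests, the assertion is ultimately against what docker created; a manual sketch with a hypothetical profile name, reusing the inspect format string from kic_custom_network_test.go:161 above (the docker network is named after the profile):

	# Start a cluster on a custom subnet and confirm docker honoured it.
	out/minikube-linux-arm64 start -p subnet-demo --driver=docker --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'
	out/minikube-linux-arm64 delete -p subnet-demo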

TestKicStaticIP (34.58s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-816907 --static-ip=192.168.200.200
E0717 23:57:58.054441 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0717 23:58:25.736023 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-816907 --static-ip=192.168.200.200: (32.356094914s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-816907 ip
helpers_test.go:175: Cleaning up "static-ip-816907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-816907
E0717 23:58:26.697822 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-816907: (2.05729384s)
--- PASS: TestKicStaticIP (34.58s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (67.85s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-296823 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-296823 --driver=docker  --container-runtime=crio: (32.316557436s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-299383 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-299383 --driver=docker  --container-runtime=crio: (30.385360848s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-296823
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-299383
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-299383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-299383
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-299383: (1.950200571s)
helpers_test.go:175: Cleaning up "first-296823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-296823
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-296823: (1.931377778s)
--- PASS: TestMinikubeProfile (67.85s)
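
TestMinikubeProfile only checks that two clusters coexist and that `minikube profile <name>` flips the active profile; the same flow with hypothetical profile names:

	# Run two side-by-side clusters and switch the active profile between them.
	out/minikube-linux-arm64 start -p first-demo --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p second-demo --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 profile first-demo
	out/minikube-linux-arm64 profile list -ojson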

TestMountStart/serial/StartWithMountFirst (8.53s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-242423 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-242423 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.526716289s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.53s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-242423 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (7.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-244072 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-244072 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.314736346s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.31s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-244072 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-242423 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-242423 --alsologtostderr -v=5: (1.655862011s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-244072 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-244072
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-244072: (1.223218394s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.96s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-244072
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-244072: (6.96209409s)
--- PASS: TestMountStart/serial/RestartStopped (7.96s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-244072 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (125.57s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-451668 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0718 00:00:10.910111 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0718 00:00:42.852510 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0718 00:01:10.538091 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0718 00:01:33.953768 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-451668 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m5.030376297s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.57s)

TestMultiNode/serial/DeployApp2Nodes (5.96s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-451668 -- rollout status deployment/busybox: (3.760997125s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-d4jjr -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-qfp74 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-d4jjr -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-qfp74 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-d4jjr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec busybox-67b7f59bb-qfp74 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.96s)
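
DeployApp2Nodes resolves the same three names from a replica on each node to prove cross-node pod DNS; one leg of that check, letting kubectl pick a replica rather than naming pods (deploy/busybox is the deployment applied above):

	# nslookup from one busybox replica via the bundled kubectl.
	out/minikube-linux-arm64 kubectl -p multinode-451668 -- exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local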

TestMultiNode/serial/AddNode (63.83s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-451668 -v 3 --alsologtostderr
E0718 00:02:58.054512 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-451668 -v 3 --alsologtostderr: (1m3.124293542s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (63.83s)

TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (10.7s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp testdata/cp-test.txt multinode-451668:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp multinode-451668:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1638667901/001/cp-test_multinode-451668.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp multinode-451668:/home/docker/cp-test.txt multinode-451668-m02:/home/docker/cp-test_multinode-451668_multinode-451668-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m02 "sudo cat /home/docker/cp-test_multinode-451668_multinode-451668-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp multinode-451668:/home/docker/cp-test.txt multinode-451668-m03:/home/docker/cp-test_multinode-451668_multinode-451668-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m03 "sudo cat /home/docker/cp-test_multinode-451668_multinode-451668-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp testdata/cp-test.txt multinode-451668-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp multinode-451668-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1638667901/001/cp-test_multinode-451668-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp multinode-451668-m02:/home/docker/cp-test.txt multinode-451668:/home/docker/cp-test_multinode-451668-m02_multinode-451668.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668 "sudo cat /home/docker/cp-test_multinode-451668-m02_multinode-451668.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp multinode-451668-m02:/home/docker/cp-test.txt multinode-451668-m03:/home/docker/cp-test_multinode-451668-m02_multinode-451668-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m03 "sudo cat /home/docker/cp-test_multinode-451668-m02_multinode-451668-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp testdata/cp-test.txt multinode-451668-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp multinode-451668-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1638667901/001/cp-test_multinode-451668-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp multinode-451668-m03:/home/docker/cp-test.txt multinode-451668:/home/docker/cp-test_multinode-451668-m03_multinode-451668.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668 "sudo cat /home/docker/cp-test_multinode-451668-m03_multinode-451668.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 cp multinode-451668-m03:/home/docker/cp-test.txt multinode-451668-m02:/home/docker/cp-test_multinode-451668-m03_multinode-451668-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 ssh -n multinode-451668-m02 "sudo cat /home/docker/cp-test_multinode-451668-m03_multinode-451668-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.70s)

TestMultiNode/serial/StopNode (2.35s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-451668 node stop m03: (1.252497148s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-451668 status: exit status 7 (525.838374ms)

-- stdout --
	multinode-451668
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-451668-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-451668-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-451668 status --alsologtostderr: exit status 7 (571.63853ms)

-- stdout --
	multinode-451668
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-451668-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-451668-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0718 00:03:38.715663 1879775 out.go:296] Setting OutFile to fd 1 ...
	I0718 00:03:38.715829 1879775 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:03:38.715838 1879775 out.go:309] Setting ErrFile to fd 2...
	I0718 00:03:38.715844 1879775 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:03:38.716254 1879775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	I0718 00:03:38.716443 1879775 out.go:303] Setting JSON to false
	I0718 00:03:38.716520 1879775 mustload.go:65] Loading cluster: multinode-451668
	I0718 00:03:38.716566 1879775 notify.go:220] Checking for updates...
	I0718 00:03:38.719427 1879775 config.go:182] Loaded profile config "multinode-451668": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0718 00:03:38.719455 1879775 status.go:255] checking status of multinode-451668 ...
	I0718 00:03:38.719939 1879775 cli_runner.go:164] Run: docker container inspect multinode-451668 --format={{.State.Status}}
	I0718 00:03:38.738367 1879775 status.go:330] multinode-451668 host status = "Running" (err=<nil>)
	I0718 00:03:38.738389 1879775 host.go:66] Checking if "multinode-451668" exists ...
	I0718 00:03:38.738700 1879775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-451668
	I0718 00:03:38.760082 1879775 host.go:66] Checking if "multinode-451668" exists ...
	I0718 00:03:38.760495 1879775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 00:03:38.760551 1879775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668
	I0718 00:03:38.788964 1879775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34738 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668/id_rsa Username:docker}
	I0718 00:03:38.883743 1879775 ssh_runner.go:195] Run: systemctl --version
	I0718 00:03:38.889921 1879775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 00:03:38.904124 1879775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 00:03:38.977337 1879775 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-07-18 00:03:38.96682434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0718 00:03:38.977910 1879775 kubeconfig.go:92] found "multinode-451668" server: "https://192.168.58.2:8443"
	I0718 00:03:38.977931 1879775 api_server.go:166] Checking apiserver status ...
	I0718 00:03:38.977973 1879775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 00:03:38.991034 1879775 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1252/cgroup
	I0718 00:03:39.016835 1879775 api_server.go:182] apiserver freezer: "6:freezer:/docker/865d9e37b02c1a77b484f2287a980e8a32e41c2b0e7dc6accbc61f8116fda149/crio/crio-f38376f93d96ae4dff4ad235c074ec6d08ff57448fa949ffccc27333d010de11"
	I0718 00:03:39.016917 1879775 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/865d9e37b02c1a77b484f2287a980e8a32e41c2b0e7dc6accbc61f8116fda149/crio/crio-f38376f93d96ae4dff4ad235c074ec6d08ff57448fa949ffccc27333d010de11/freezer.state
	I0718 00:03:39.028785 1879775 api_server.go:204] freezer state: "THAWED"
	I0718 00:03:39.028811 1879775 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0718 00:03:39.038829 1879775 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0718 00:03:39.038865 1879775 status.go:421] multinode-451668 apiserver status = Running (err=<nil>)
	I0718 00:03:39.038888 1879775 status.go:257] multinode-451668 status: &{Name:multinode-451668 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 00:03:39.038931 1879775 status.go:255] checking status of multinode-451668-m02 ...
	I0718 00:03:39.039316 1879775 cli_runner.go:164] Run: docker container inspect multinode-451668-m02 --format={{.State.Status}}
	I0718 00:03:39.058204 1879775 status.go:330] multinode-451668-m02 host status = "Running" (err=<nil>)
	I0718 00:03:39.058231 1879775 host.go:66] Checking if "multinode-451668-m02" exists ...
	I0718 00:03:39.058610 1879775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-451668-m02
	I0718 00:03:39.076751 1879775 host.go:66] Checking if "multinode-451668-m02" exists ...
	I0718 00:03:39.077066 1879775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 00:03:39.077114 1879775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-451668-m02
	I0718 00:03:39.097607 1879775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34743 SSHKeyPath:/home/jenkins/minikube-integration/16899-1800837/.minikube/machines/multinode-451668-m02/id_rsa Username:docker}
	I0718 00:03:39.188921 1879775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 00:03:39.203830 1879775 status.go:257] multinode-451668-m02 status: &{Name:multinode-451668-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0718 00:03:39.203865 1879775 status.go:255] checking status of multinode-451668-m03 ...
	I0718 00:03:39.204178 1879775 cli_runner.go:164] Run: docker container inspect multinode-451668-m03 --format={{.State.Status}}
	I0718 00:03:39.223409 1879775 status.go:330] multinode-451668-m03 host status = "Stopped" (err=<nil>)
	I0718 00:03:39.223431 1879775 status.go:343] host is not running, skipping remaining checks
	I0718 00:03:39.223438 1879775 status.go:257] multinode-451668-m03 status: &{Name:multinode-451668-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
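
Editor's note: `minikube status` reports per-node state both in the text above and through its exit code; exit status 7 means at least one host is stopped, not that the command itself failed. A minimal Go sketch of reading that exit code with os/exec, assuming minikube is on PATH and the profile from this run exists:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run `minikube status` for the profile exercised above.
		cmd := exec.Command("minikube", "-p", "multinode-451668", "status")
		out, err := cmd.Output()
		fmt.Print(string(out))

		// Exit status 7 signals "a host is stopped"; inspect the
		// ExitError instead of treating every nonzero exit as fatal.
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}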

TestMultiNode/serial/StartAfterStop (12.47s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-451668 node start m03 --alsologtostderr: (11.60890866s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.47s)

TestMultiNode/serial/RestartKeepsNodes (120.51s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-451668
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-451668
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-451668: (25.021973054s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-451668 --wait=true -v=8 --alsologtostderr
E0718 00:05:10.910739 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0718 00:05:42.852380 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-451668 --wait=true -v=8 --alsologtostderr: (1m35.364256328s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-451668
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.51s)

TestMultiNode/serial/DeleteNode (5.06s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-451668 node delete m03: (4.325687907s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.06s)
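
Editor's note: the final check above evaluates a go-template over the node list to print each node's Ready condition. The same template can be exercised locally with text/template; the two-node JSON below is a hypothetical stand-in for a `kubectl get nodes -o json` response, decoded into maps so the lowercase keys (.items, .status, ...) resolve:

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	// The template the test passes to kubectl: print the status of
	// every node condition whose type is "Ready".
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hypothetical stand-in for a two-node response.
	const nodes = `{"items":[
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	func main() {
		var data map[string]interface{}
		if err := json.Unmarshal([]byte(nodes), &data); err != nil {
			panic(err)
		}
		// Decoding into maps keeps the lowercase JSON keys addressable
		// from the template.
		t := template.Must(template.New("ready").Parse(tmpl))
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}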

TestMultiNode/serial/StopMultiNode (24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-451668 stop: (23.822079049s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-451668 status: exit status 7 (91.288233ms)

-- stdout --
	multinode-451668
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-451668-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-451668 status --alsologtostderr: exit status 7 (89.577144ms)

-- stdout --
	multinode-451668
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-451668-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0718 00:06:21.235681 1887892 out.go:296] Setting OutFile to fd 1 ...
	I0718 00:06:21.235853 1887892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:06:21.235880 1887892 out.go:309] Setting ErrFile to fd 2...
	I0718 00:06:21.235900 1887892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:06:21.236192 1887892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	I0718 00:06:21.236401 1887892 out.go:303] Setting JSON to false
	I0718 00:06:21.236489 1887892 mustload.go:65] Loading cluster: multinode-451668
	I0718 00:06:21.236566 1887892 notify.go:220] Checking for updates...
	I0718 00:06:21.237007 1887892 config.go:182] Loaded profile config "multinode-451668": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0718 00:06:21.237020 1887892 status.go:255] checking status of multinode-451668 ...
	I0718 00:06:21.237550 1887892 cli_runner.go:164] Run: docker container inspect multinode-451668 --format={{.State.Status}}
	I0718 00:06:21.256739 1887892 status.go:330] multinode-451668 host status = "Stopped" (err=<nil>)
	I0718 00:06:21.256762 1887892 status.go:343] host is not running, skipping remaining checks
	I0718 00:06:21.256769 1887892 status.go:257] multinode-451668 status: &{Name:multinode-451668 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 00:06:21.256796 1887892 status.go:255] checking status of multinode-451668-m02 ...
	I0718 00:06:21.257179 1887892 cli_runner.go:164] Run: docker container inspect multinode-451668-m02 --format={{.State.Status}}
	I0718 00:06:21.274759 1887892 status.go:330] multinode-451668-m02 host status = "Stopped" (err=<nil>)
	I0718 00:06:21.274785 1887892 status.go:343] host is not running, skipping remaining checks
	I0718 00:06:21.274795 1887892 status.go:257] multinode-451668-m02 status: &{Name:multinode-451668-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.00s)

TestMultiNode/serial/RestartMultiNode (89.87s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-451668 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-451668 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m29.056556619s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-451668 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.87s)

TestMultiNode/serial/ValidateNameConflict (36.81s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-451668
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-451668-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-451668-m02 --driver=docker  --container-runtime=crio: exit status 14 (76.567379ms)

-- stdout --
	* [multinode-451668-m02] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-451668-m02' is duplicated with machine name 'multinode-451668-m02' in profile 'multinode-451668'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-451668-m03 --driver=docker  --container-runtime=crio
E0718 00:07:58.054454 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-451668-m03 --driver=docker  --container-runtime=crio: (34.331610503s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-451668
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-451668: exit status 80 (327.091076ms)

-- stdout --
	* Adding node m03 to cluster multinode-451668
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-451668-m03 already exists in multinode-451668-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-451668-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-451668-m03: (1.999627397s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.81s)

TestPreload (171.5s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-789642 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0718 00:09:21.096908 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-789642 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m26.788436767s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-789642 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-789642 image pull gcr.io/k8s-minikube/busybox: (2.139760046s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-789642
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-789642: (5.847824316s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-789642 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0718 00:10:10.910170 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0718 00:10:42.852152 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-789642 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m14.093708116s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-789642 image list
helpers_test.go:175: Cleaning up "test-preload-789642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-789642
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-789642: (2.394351828s)
--- PASS: TestPreload (171.50s)

TestScheduledStopUnix (109.4s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-384185 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-384185 --memory=2048 --driver=docker  --container-runtime=crio: (33.033864303s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-384185 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-384185 -n scheduled-stop-384185
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-384185 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-384185 --cancel-scheduled
E0718 00:12:05.900234 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-384185 -n scheduled-stop-384185
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-384185
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-384185 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0718 00:12:58.054506 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-384185
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-384185: exit status 7 (78.567761ms)

-- stdout --
	scheduled-stop-384185
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-384185 -n scheduled-stop-384185
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-384185 -n scheduled-stop-384185: exit status 7 (71.282542ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-384185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-384185
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-384185: (4.722607746s)
--- PASS: TestScheduledStopUnix (109.40s)
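
Editor's note: scheduled stop behaves like a cancellable timer: `--schedule 15s` arms a delayed stop, `--cancel-scheduled` disarms it, and re-scheduling replaces the pending one, which is why the log above reports the earlier stop process as already finished. A conceptual sketch of that arm/cancel pattern with time.AfterFunc; this illustrates the semantics only, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Arm a stop, as `minikube stop --schedule 15s` does.
		timer := time.AfterFunc(15*time.Second, func() {
			fmt.Println("stopping cluster now")
		})

		// `--cancel-scheduled` corresponds to disarming the timer;
		// Stop reports whether the stop was still pending.
		if timer.Stop() {
			fmt.Println("scheduled stop cancelled")
		}
	}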

TestInsufficientStorage (13.11s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-477699 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-477699 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.543715308s)

-- stdout --
	{"specversion":"1.0","id":"a0966780-d0b5-4867-8897-b5f048c29430","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-477699] minikube v1.31.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc9a7e68-9b2b-4da2-b0d3-3a03511a87a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"b1ad22ea-dcae-41eb-a315-06af462e29ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7cebdad0-88c1-419e-b83e-279f905bc941","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig"}}
	{"specversion":"1.0","id":"6a1a25f4-4e7d-47c6-9a1c-a8d65527ed91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube"}}
	{"specversion":"1.0","id":"1519b9e3-1f8f-4fbd-9817-f939fdd6b2c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6d6e2617-97aa-420a-b4a0-a57becafe7cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"17fbd1ff-0af3-4f77-a4ec-36ba1514810b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bb1669b0-2f49-487e-ab46-ea5831ca3a78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3c663e91-6335-4299-a06f-845c8ac5d6b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b290009-2bf0-4b65-811b-e9da06d2f326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a62a0415-b175-4288-9f75-8c9d4923725f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-477699 in cluster insufficient-storage-477699","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"516993f6-29a6-4a04-b83e-3722f4f1fdb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"db961372-6f61-4577-8cad-f543003a44ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"215e5e1f-f901-4e02-a9ca-bc9a66539586","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-477699 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-477699 --output=json --layout=cluster: exit status 7 (310.049475ms)

-- stdout --
	{"Name":"insufficient-storage-477699","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-477699","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0718 00:13:25.980518 1904776 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-477699" does not appear in /home/jenkins/minikube-integration/16899-1800837/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-477699 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-477699 --output=json --layout=cluster: exit status 7 (325.558636ms)

-- stdout --
	{"Name":"insufficient-storage-477699","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-477699","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0718 00:13:26.305617 1904829 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-477699" does not appear in /home/jenkins/minikube-integration/16899-1800837/kubeconfig
	E0718 00:13:26.319226 1904829 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/insufficient-storage-477699/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-477699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-477699
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-477699: (1.926308444s)
--- PASS: TestInsufficientStorage (13.11s)
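
Editor's note: with `--output=json`, minikube prints one CloudEvents-style JSON object per line, and the storage failure arrives as a `type` of `io.k8s.sigs.minikube.error` whose data carries `exitcode` "26". A minimal sketch of consuming such a stream; the struct models only the keys visible in the output above, and stdin stands in for the piped command output:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// Event mirrors the keys visible above; all data values are strings.
	type Event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// e.g. `minikube start --output=json | thisprogram`
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events can be long lines
		for sc.Scan() {
			var ev Event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // skip blank or non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit code %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}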

TestKubernetesUpgrade (381.46s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-425792 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-425792 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.09683289s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-425792
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-425792: (1.284162011s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-425792 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-425792 status --format={{.Host}}: exit status 7 (67.674873ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-425792 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0718 00:17:58.054130 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0718 00:18:13.954346 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-425792 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m45.024748807s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-425792 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-425792 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-425792 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (85.953862ms)

-- stdout --
	* [kubernetes-upgrade-425792] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-425792
	    minikube start -p kubernetes-upgrade-425792 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4257922 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-425792 --kubernetes-version=v1.27.3
	    

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-425792 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0718 00:22:58.054189 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-425792 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.506384462s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-425792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-425792
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-425792: (2.283002611s)
--- PASS: TestKubernetesUpgrade (381.46s)
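
Editor's note: the downgrade refusal above (exit 106, K8S_DOWNGRADE_UNSUPPORTED) comes down to comparing the requested version against the version the cluster already runs: upgrades proceed in place, downgrades are rejected with the recreate/second-cluster suggestions shown. An illustrative sketch of that guard using golang.org/x/mod/semver; this is not minikube's actual code path:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/mod/semver"
	)

	func main() {
		existing := "v1.27.3"  // version the cluster already runs
		requested := "v1.16.0" // version passed via --kubernetes-version

		// Refuse to move an existing cluster to an older Kubernetes.
		if semver.Compare(requested, existing) < 0 {
			fmt.Fprintf(os.Stderr,
				"unable to safely downgrade existing Kubernetes %s cluster to %s\n",
				existing, requested)
			os.Exit(106)
		}
		fmt.Println("version change accepted")
	}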

TestPause/serial/Start (87.84s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-060034 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-060034 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m27.844347958s)
--- PASS: TestPause/serial/Start (87.84s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608533 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-608533 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (88.050784ms)

-- stdout --
	* [NoKubernetes-608533] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
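
Editor's note: the exit-14 (MK_USAGE) failure above is plain flag validation: requesting a Kubernetes version while also requesting no Kubernetes is contradictory, so the start is rejected before any work happens. An illustrative sketch of that mutual-exclusion check with the standard flag package; the flag names mirror the real CLI, the rest is hypothetical:

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// Contradictory flags: fail fast with a usage error.
		if *noK8s && *version != "" {
			fmt.Fprintln(os.Stderr,
				"cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
		fmt.Println("flags OK")
	}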

TestNoKubernetes/serial/StartWithK8s (44.43s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608533 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608533 --driver=docker  --container-runtime=crio: (44.019897746s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-608533 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.43s)

TestNoKubernetes/serial/StartWithStopK8s (6.59s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608533 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608533 --no-kubernetes --driver=docker  --container-runtime=crio: (4.269615812s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-608533 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-608533 status -o json: exit status 2 (338.662783ms)

-- stdout --
	{"Name":"NoKubernetes-608533","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-608533
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-608533: (1.982452455s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.59s)
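
Editor's note: the `status -o json` line above is stable enough to unmarshal directly; Host can be Running while Kubelet stays Stopped because the container is up but no Kubernetes components were started. A small sketch decoding that exact line; only the fields printed above are modelled:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status models the fields visible in the output above.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// The exact line printed by the test above.
		raw := `{"Name":"NoKubernetes-608533","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

		var st Status
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}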

TestNoKubernetes/serial/Start (9.28s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608533 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608533 --no-kubernetes --driver=docker  --container-runtime=crio: (9.283474599s)
--- PASS: TestNoKubernetes/serial/Start (9.28s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-608533 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-608533 "sudo systemctl is-active --quiet service kubelet": exit status 1 (309.615026ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.00s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-608533
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-608533: (1.249512078s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (7.59s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608533 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608533 --driver=docker  --container-runtime=crio: (7.586406078s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.59s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-608533 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-608533 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.180538ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestNetworkPlugins/group/false (3.79s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-483744 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-483744 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (208.936077ms)

-- stdout --
	* [false-483744] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0718 00:14:44.370620 1913735 out.go:296] Setting OutFile to fd 1 ...
	I0718 00:14:44.370753 1913735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:14:44.370763 1913735 out.go:309] Setting ErrFile to fd 2...
	I0718 00:14:44.370769 1913735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0718 00:14:44.371054 1913735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-1800837/.minikube/bin
	I0718 00:14:44.371470 1913735 out.go:303] Setting JSON to false
	I0718 00:14:44.372565 1913735 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32229,"bootTime":1689607056,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0718 00:14:44.372636 1913735 start.go:138] virtualization:  
	I0718 00:14:44.374958 1913735 out.go:177] * [false-483744] minikube v1.31.0 on Ubuntu 20.04 (arm64)
	I0718 00:14:44.377619 1913735 notify.go:220] Checking for updates...
	I0718 00:14:44.381048 1913735 out.go:177]   - MINIKUBE_LOCATION=16899
	I0718 00:14:44.382706 1913735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 00:14:44.393305 1913735 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig
	I0718 00:14:44.395346 1913735 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-1800837/.minikube
	I0718 00:14:44.397110 1913735 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0718 00:14:44.399132 1913735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 00:14:44.401480 1913735 config.go:182] Loaded profile config "pause-060034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0718 00:14:44.401637 1913735 driver.go:373] Setting default libvirt URI to qemu:///system
	I0718 00:14:44.426308 1913735 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0718 00:14:44.426429 1913735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 00:14:44.517884 1913735 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-18 00:14:44.507782036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0718 00:14:44.517990 1913735 docker.go:294] overlay module found
	I0718 00:14:44.520096 1913735 out.go:177] * Using the docker driver based on user configuration
	I0718 00:14:44.521807 1913735 start.go:298] selected driver: docker
	I0718 00:14:44.521823 1913735 start.go:880] validating driver "docker" against <nil>
	I0718 00:14:44.521848 1913735 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 00:14:44.524548 1913735 out.go:177] 
	W0718 00:14:44.526338 1913735 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0718 00:14:44.528306 1913735 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-483744 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-483744

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-483744

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-483744

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-483744

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-483744

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-483744

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-483744

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-483744

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-483744

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-483744

>>> host: /etc/nsswitch.conf:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: /etc/hosts:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: /etc/resolv.conf:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-483744

>>> host: crictl pods:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: crictl containers:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> k8s: describe netcat deployment:
error: context "false-483744" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-483744" does not exist

>>> k8s: netcat logs:
error: context "false-483744" does not exist

>>> k8s: describe coredns deployment:
error: context "false-483744" does not exist

>>> k8s: describe coredns pods:
error: context "false-483744" does not exist

>>> k8s: coredns logs:
error: context "false-483744" does not exist

>>> k8s: describe api server pod(s):
error: context "false-483744" does not exist

>>> k8s: api server logs:
error: context "false-483744" does not exist

>>> host: /etc/cni:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: ip a s:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: ip r s:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: iptables-save:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: iptables table nat:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> k8s: describe kube-proxy daemon set:
error: context "false-483744" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-483744" does not exist

>>> k8s: kube-proxy logs:
error: context "false-483744" does not exist

>>> host: kubelet daemon status:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: kubelet daemon config:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> k8s: kubelet logs:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 18 Jul 2023 00:14:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-060034
contexts:
- context:
    cluster: pause-060034
    extensions:
    - extension:
        last-update: Tue, 18 Jul 2023 00:14:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.0
      name: context_info
    namespace: default
    user: pause-060034
  name: pause-060034
current-context: pause-060034
kind: Config
preferences: {}
users:
- name: pause-060034
  user:
    client-certificate: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/pause-060034/client.crt
    client-key: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/pause-060034/client.key
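The dump above was presumably gathered with something like `kubectl config view`; a minimal sketch for inspecting the same state by hand (the KUBECONFIG path is the one this run already exports):

    # Print only the active context and its cluster/user stanzas.
    KUBECONFIG=/home/jenkins/minikube-integration/16899-1800837/kubeconfig \
      kubectl config view --minify
    kubectl config current-context   # pause-060034 in this run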

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-483744

>>> host: docker daemon status:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: docker daemon config:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: /etc/docker/daemon.json:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: docker system info:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: cri-docker daemon status:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: cri-docker daemon config:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: cri-dockerd version:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: containerd daemon status:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: containerd daemon config:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: /etc/containerd/config.toml:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: containerd config dump:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: crio daemon status:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: crio daemon config:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: /etc/crio:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

>>> host: crio config:
* Profile "false-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-483744"

----------------------- debugLogs end: false-483744 [took: 3.413614549s] --------------------------------
helpers_test.go:175: Cleaning up "false-483744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-483744
--- PASS: TestNetworkPlugins/group/false (3.79s)
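This group passes by confirming that minikube rejects the configuration outright rather than by building a cluster; a hedged repro sketch (the --cni=false flag is inferred from the group name, the rest mirrors the sibling Start invocations in this report):

    out/minikube-linux-arm64 start -p false-483744 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=false --driver=docker --container-runtime=crio
    # Expected: non-zero exit with
    # X Exiting due to MK_USAGE: The "crio" container runtime requires CNI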

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (44.12s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-060034 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0718 00:15:10.910015 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-060034 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.091197442s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.12s)

                                                
                                    
x
+
TestPause/serial/Pause (1.26s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-060034 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-060034 --alsologtostderr -v=5: (1.258677196s)
--- PASS: TestPause/serial/Pause (1.26s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.91s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-060034 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-060034 --output=json --layout=cluster: exit status 2 (909.172807ms)

                                                
                                                
-- stdout --
	{"Name":"pause-060034","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-060034","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.91s)
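The StatusCode fields in the JSON above reuse HTTP codes (200 OK, 405 Stopped, 418 Paused); a small sketch, assuming jq is installed on the host, for pulling the per-component state out of the same command:

    out/minikube-linux-arm64 status -p pause-060034 --output=json --layout=cluster \
      | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'
    # apiserver: Paused
    # kubelet: Stopped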

                                                
                                    
x
+
TestPause/serial/Unpause (1.35s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-060034 --alsologtostderr -v=5
E0718 00:15:42.853137 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-060034 --alsologtostderr -v=5: (1.347801544s)
--- PASS: TestPause/serial/Unpause (1.35s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.57s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-060034 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-060034 --alsologtostderr -v=5: (1.571727199s)
--- PASS: TestPause/serial/PauseAgain (1.57s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.97s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-060034 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-060034 --alsologtostderr -v=5: (2.971820485s)
--- PASS: TestPause/serial/DeletePaused (2.97s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-060034
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-060034: exit status 1 (28.597916ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-060034: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)
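The checks above pass because nothing profile-scoped survives the delete; a manual equivalent (the name filters assume minikube's convention of naming the container, volume, and network after the profile):

    docker ps -a --filter name=pause-060034       # no containers remain
    docker volume inspect pause-060034            # exit 1: no such volume
    docker network ls --filter name=pause-060034  # no profile network remains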

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (81.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m21.401221572s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-954789
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (84.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0718 00:25:10.910214 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m24.318854456s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-483744 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
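KubeletFlags asserts on the kubelet's live command line; `pgrep -a` prints each matching PID followed by its full argv, which is what exposes flags such as the configured container runtime. The same probe by hand:

    out/minikube-linux-arm64 ssh -p auto-483744 "pgrep -a kubelet"
    # prints: <pid> /path/to/kubelet --flag=value ...  (illustrative; flags vary by version)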

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-483744 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hzz47" [2a79ccf8-896b-4444-aecc-cc96d4381ae9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0718 00:25:42.853360 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-hzz47" [2a79ccf8-896b-4444-aecc-cc96d4381ae9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009227177s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.45s)
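The harness polls for the app=netcat pod to become Ready; a sketch of the same wait with plain kubectl (the 15m timeout mirrors the bound logged above):

    kubectl --context auto-483744 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-483744 wait --for=condition=Ready pod -l app=netcat --timeout=15m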

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-483744 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
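Localhost and HairPin differ only in the nc target: localhost 8080 checks the pod can reach its own listener directly, while netcat 8080 routes back to the pod through its own Service (hairpin traffic); whether that hop should succeed depends on the CNI's hairpin support, which the test accounts for. The nc flags are standard: -z probes without sending data, -w 5 is the connection timeout, -i 5 the interval between probes. The hairpin probe outside the harness:

    kubectl --context auto-483744 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo hairpin OK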

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qgpk6" [1dc7c510-19a6-434c-be02-c78a2fb131de] Running
E0718 00:26:01.097075 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.038192633s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-483744 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-483744 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-7hdbq" [18e9b90f-a311-44ee-bfb6-a53b2c729fbe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-7hdbq" [18e9b90f-a311-44ee-bfb6-a53b2c729fbe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.01279902s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (80.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m20.073747445s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-483744 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (76.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m16.914424841s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9drdg" [1b3c40b8-6e6e-4f5c-b459-baecd715d886] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.045560177s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-483744 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-483744 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-nk45g" [d01788f3-1646-4870-8c8d-d90678ba4b86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-nk45g" [d01788f3-1646-4870-8c8d-d90678ba4b86] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.007049312s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-483744 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-483744 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-483744 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-x4rb8" [e9ad1b05-f5e5-45f2-96a7-5fc204fd59ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-x4rb8" [e9ad1b05-f5e5-45f2-96a7-5fc204fd59ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.011423544s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-483744 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (92.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m32.710772727s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (68.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0718 00:28:45.902534 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m8.389622404s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-l8nqv" [e63ae780-bdc8-4385-b25f-796abbd906e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.029113738s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-483744 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-483744 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-8tqr4" [ecbca1eb-9300-4724-a602-cfba70d622dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-8tqr4" [ecbca1eb-9300-4724-a602-cfba70d622dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.007552276s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-483744 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-483744 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-glfbq" [3e7b9a52-fb6e-4cc5-ade8-1ae233ee376f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-glfbq" [3e7b9a52-fb6e-4cc5-ade8-1ae233ee376f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.022364784s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-483744 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-483744 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (94.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-483744 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m34.785077905s)
--- PASS: TestNetworkPlugins/group/bridge/Start (94.79s)
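Taken together, the Start runs in this group sweep minikube's CNI matrix on crio; only the CNI selection changes between profiles. A condensed view of the invocations above (a summary, not a new command):

    out/minikube-linux-arm64 start -p <profile> --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --driver=docker --container-runtime=crio \
      [--cni=kindnet|calico|flannel|bridge|testdata/kube-flannel.yaml | --enable-default-cni=true]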

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (128.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-336902 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0718 00:30:41.569198 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:41.574459 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:41.585064 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:41.605414 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:41.645689 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:41.726328 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:41.886706 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:42.207375 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:42.848265 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:42.852616 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0718 00:30:44.128607 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:46.690554 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:51.811042 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:30:59.630579 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:30:59.635808 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:30:59.646123 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:30:59.666357 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:30:59.706550 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:30:59.786733 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:30:59.947209 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:31:00.267540 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:31:00.908555 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:31:02.051248 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:31:02.189429 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:31:04.750058 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:31:09.870264 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:31:20.111082 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:31:22.532039 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:31:40.591235 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-336902 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m8.768467223s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (128.77s)
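
The interleaved cert_rotation.go:168 errors above come from client-go's certificate watcher and, by all appearances, refer to client certs of profiles (auto-483744, kindnet-483744) already torn down earlier in the run; they are background noise and did not affect this test's result. A quick check that the watched file is indeed gone (path copied verbatim from the log):

    ls -l /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt \
      || echo "client.crt absent; watcher noise expected"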

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-483744 "pgrep -a kubelet"
E0718 00:32:03.492763 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (13.37s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-483744 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-pbrn9" [e67182cd-5096-4d85-a75d-3157bb90bdc3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-pbrn9" [e67182cd-5096-4d85-a75d-3157bb90bdc3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.011390729s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.37s)
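
NetCatPod only applies the netcat deployment and waits for its pod to become Ready; a minimal hand-run sketch of the same sequence, reusing the manifest referenced above:

    kubectl --context bridge-483744 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context bridge-483744 wait --for=condition=Ready pod -l app=netcat --timeout=15m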

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-483744 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-483744 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)
E0718 00:50:10.656871 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:50:10.910449 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0718 00:50:41.568787 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:50:42.852696 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0718 00:50:59.630667 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:51:11.054519 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:51:13.611777 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:51:32.577304 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:51:33.955508 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0718 00:52:04.085009 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:52:36.031422 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
E0718 00:52:43.977682 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:52:58.054348 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0718 00:53:00.911545 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:53:27.129778 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:53:48.735962 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:54:16.417730 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:54:48.009254 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:54:50.567659 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:55:10.910138 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0718 00:55:41.569478 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:55:42.852167 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0718 00:55:59.631024 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (69.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-378585 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0718 00:32:38.635612 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
E0718 00:32:41.196278 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-378585 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m9.883209862s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.88s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-336902 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f6704137-7f31-46cd-a87c-56d6f1e8f5b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0718 00:32:46.316431 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f6704137-7f31-46cd-a87c-56d6f1e8f5b5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.029660172s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-336902 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.54s)
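
DeployApp creates the busybox pod from testdata/busybox.yaml, waits for it to run, then reads the container's open-file limit. Roughly the same steps by hand (a sketch; the wait timeout mirrors the 8m0s above):

    kubectl --context old-k8s-version-336902 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-336902 wait --for=condition=Ready pod busybox --timeout=8m
    kubectl --context old-k8s-version-336902 exec busybox -- /bin/sh -c "ulimit -n"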

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-336902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0718 00:32:56.557252 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-336902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.681704641s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-336902 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.82s)

TestStartStop/group/old-k8s-version/serial/Stop (13.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-336902 --alsologtostderr -v=3
E0718 00:32:58.054199 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0718 00:33:00.911827 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:00.917053 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:00.927299 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:00.947565 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:00.987807 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:01.068043 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:01.228385 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:01.549213 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:02.189429 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:03.470057 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:06.031123 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-336902 --alsologtostderr -v=3: (13.145173039s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-336902 -n old-k8s-version-336902
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-336902 -n old-k8s-version-336902: exit status 7 (105.582992ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-336902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
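
Exit status 7 is what minikube status returns here for a stopped host, and the harness explicitly tolerates it ("may be ok"). A hand-run check that accepts the same outcome (a sketch; the 0/7 whitelist mirrors this log, not a documented contract):

    out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-336902 -n old-k8s-version-336902
    rc=$?
    [ "$rc" -eq 0 ] || [ "$rc" -eq 7 ] || echo "unexpected status exit code: $rc"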

TestStartStop/group/old-k8s-version/serial/SecondStart (435.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-336902 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0718 00:33:11.151608 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:17.037924 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
E0718 00:33:21.391991 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:25.413669 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:33:41.872834 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:33:43.472534 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-336902 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m14.80388737s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-336902 -n old-k8s-version-336902
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (435.18s)

TestStartStop/group/no-preload/serial/DeployApp (9.64s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-378585 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [630285d6-a70b-46b0-93dd-3612b94a681c] Pending
helpers_test.go:344: "busybox" [630285d6-a70b-46b0-93dd-3612b94a681c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [630285d6-a70b-46b0-93dd-3612b94a681c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.037407889s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-378585 exec busybox -- /bin/sh -c "ulimit -n"
E0718 00:33:57.998154 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.64s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-378585 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-378585 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.132844304s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-378585 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/no-preload/serial/Stop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-378585 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-378585 --alsologtostderr -v=3: (12.021029238s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-378585 -n no-preload-378585
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-378585 -n no-preload-378585: exit status 7 (75.628138ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-378585 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (626.96s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-378585 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0718 00:34:22.833808 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:34:48.009743 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:34:48.015042 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:34:48.025811 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:34:48.046070 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:34:48.086501 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:34:48.166943 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:34:48.327362 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:34:48.647895 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:34:49.288572 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:34:50.568209 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:34:50.569245 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:34:50.573441 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:34:50.583731 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:34:50.603967 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:34:50.644239 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:34:50.724672 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:34:50.885104 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:34:51.205511 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:34:51.845729 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:34:53.126584 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:34:53.129798 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:34:53.955124 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0718 00:34:55.687496 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:34:58.250350 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:35:00.808016 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:35:08.490780 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:35:10.910688 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0718 00:35:11.049008 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:35:19.918694 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
E0718 00:35:28.971302 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:35:31.529471 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:35:41.569688 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:35:42.852156 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0718 00:35:44.753946 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:35:59.631101 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:36:09.254727 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:36:09.932464 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:36:12.489885 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:36:27.312847 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:37:04.085879 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:04.091931 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:04.102270 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:04.122518 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:04.162771 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:04.243125 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:04.403615 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:04.724162 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:05.364306 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:06.644974 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:09.205173 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:14.326263 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:24.566541 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:31.853364 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:37:34.410723 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:37:36.031659 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
E0718 00:37:45.047703 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:37:58.054785 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0718 00:38:00.911373 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:38:03.758930 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
E0718 00:38:26.007949 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:38:28.594335 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:39:47.928521 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:39:48.009157 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:39:50.567936 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
E0718 00:40:10.910082 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0718 00:40:15.694369 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:40:18.251551 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-378585 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (10m26.58079936s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-378585 -n no-preload-378585
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (626.96s)
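
With --preload=false, minikube skips the preloaded image tarball and pulls each Kubernetes image individually, which is the main reason this second start runs over ten minutes. The resulting image set can be inspected afterwards with:

    out/minikube-linux-arm64 ssh -p no-preload-378585 "sudo crictl images"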

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-8xl4z" [bec9feaf-b77e-471e-9d73-68bc816bd66f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02386315s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-8xl4z" [bec9feaf-b77e-471e-9d73-68bc816bd66f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005698905s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-336902 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-336902 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)
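
VerifyKubernetesImages lists the node's images via crictl and reports anything outside minikube's expected set, as with the three images above. To eyeball the raw listing yourself (the jq filter is an illustrative assumption, not part of the test):

    out/minikube-linux-arm64 ssh -p old-k8s-version-336902 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'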

TestStartStop/group/old-k8s-version/serial/Pause (3.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-336902 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-336902 -n old-k8s-version-336902
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-336902 -n old-k8s-version-336902: exit status 2 (344.668646ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-336902 -n old-k8s-version-336902
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-336902 -n old-k8s-version-336902: exit status 2 (365.896882ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-336902 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-336902 -n old-k8s-version-336902
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-336902 -n old-k8s-version-336902
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.39s)
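
While the profile is paused, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, so both status probes above exit 2 by design. The same pause/verify/unpause cycle, runnable by hand:

    out/minikube-linux-arm64 pause -p old-k8s-version-336902 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-336902 || true
    out/minikube-linux-arm64 unpause -p old-k8s-version-336902 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-336902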

TestStartStop/group/embed-certs/serial/FirstStart (76.31s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-378337 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0718 00:40:42.852573 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0718 00:40:59.630364 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-378337 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m16.311998892s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.31s)

TestStartStop/group/embed-certs/serial/DeployApp (10.53s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-378337 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [815c0ea5-8017-4655-a234-432e1ac11a5c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [815c0ea5-8017-4655-a234-432e1ac11a5c] Running
E0718 00:42:04.085662 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.034960069s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-378337 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.53s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-378337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-378337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.083020864s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-378337 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (12.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-378337 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-378337 --alsologtostderr -v=3: (12.144905415s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.15s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-378337 -n embed-certs-378337
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-378337 -n embed-certs-378337: exit status 7 (71.578036ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-378337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (355.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-378337 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0718 00:42:31.768677 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:42:36.031006 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
E0718 00:42:41.097753 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0718 00:42:43.976811 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:43.982091 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:43.992379 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:44.012574 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:44.052814 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:44.133208 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:44.293445 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:44.613901 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:45.254123 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:46.535329 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:49.096409 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:54.217187 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:42:58.053998 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0718 00:43:00.911642 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:43:04.457398 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:43:24.938378 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:44:05.898528 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-378337 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (5m55.174640809s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-378337 -n embed-certs-378337
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (355.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-sfvqd" [8647ac70-e679-4732-a6d1-c37d21766efd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023953105s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)
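The check above polls until a pod matching k8s-app=kubernetes-dashboard reports Running. A self-contained client-go sketch of that wait (an approximation of what helpers_test.go does, not the actual helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls until at least one pod matching selector reports
// phase Running, or the timeout expires.
func waitForRunningPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat API hiccups as retryable
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForRunningPod(cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println("wait result:", err)
}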

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-sfvqd" [8647ac70-e679-4732-a6d1-c37d21766efd] Running
E0718 00:44:48.009201 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007288663s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-378585 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-378585 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)
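VerifyKubernetesImages shells `sudo crictl images -o json` in the node and flags images outside the expected set. A rough Go sketch of that parsing; the JSON field names follow the CRI list-images output as I understand it (verify locally), and the prefix check is a simplification of the test's real expected-image comparison, chosen so it would flag the same two images reported above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList models the subset of `crictl images -o json` output used here.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// The test runs this inside the node via `minikube ssh`; here we assume
	// we are already on a host with crictl configured.
	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Simplified check: anything outside registry.k8s.io is reported.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}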

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.43s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-378585 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-378585 -n no-preload-378585
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-378585 -n no-preload-378585: exit status 2 (357.6478ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-378585 -n no-preload-378585
E0718 00:44:50.567624 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-378585 -n no-preload-378585: exit status 2 (363.062746ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-378585 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-378585 -n no-preload-378585
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-378585 -n no-preload-378585
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.43s)
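The status checks in the Pause flow render a Go text/template (--format={{.APIServer}} or {{.Kubelet}}) against minikube's status struct, which is why a paused cluster prints Paused for the API server and Stopped for the kubelet while the command exits 2. A small sketch of that rendering; the Status struct here is assumed for illustration, not minikube's exact type:

package main

import (
	"os"
	"text/template"
)

// Status approximates the struct minikube renders with --format; the exact
// field set is an assumption for illustration.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// While paused: the host container still runs, kubelet and apiserver do not.
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints: Paused - the same string captured in the stdout block above.
}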

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-621366 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0718 00:45:10.910403 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/addons-579349/client.crt: no such file or directory
E0718 00:45:25.902898 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0718 00:45:27.818718 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:45:41.568796 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:45:42.852707 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/ingress-addon-legacy-856061/client.crt: no such file or directory
E0718 00:45:59.630509 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-621366 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m14.983223904s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.98s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-621366 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5d96bcb7-c91f-48fc-ac37-f22ea0c5da66] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5d96bcb7-c91f-48fc-ac37-f22ea0c5da66] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.030749257s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-621366 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)
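The DeployApp step finishes by reading the pod's open-file limit with `ulimit -n`. A minimal sketch of driving that same probe from Go via os/exec (context name taken from the run above; adjust for your own cluster):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exec into the busybox pod and print its open-file limit, mirroring the
	// final assertion of the DeployApp step above.
	out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-621366",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	fmt.Printf("open-file limit in pod: %s", out)
}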

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-621366 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-621366 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.178461246s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-621366 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-621366 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-621366 --alsologtostderr -v=3: (12.085040512s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-621366 -n default-k8s-diff-port-621366
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-621366 -n default-k8s-diff-port-621366: exit status 7 (70.334011ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-621366 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (618.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-621366 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0718 00:47:04.085924 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
E0718 00:47:04.615538 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/auto-483744/client.crt: no such file or directory
E0718 00:47:22.673436 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/kindnet-483744/client.crt: no such file or directory
E0718 00:47:36.031655 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
E0718 00:47:43.977484 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
E0718 00:47:58.054189 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/functional-926032/client.crt: no such file or directory
E0718 00:48:00.911192 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
E0718 00:48:11.659381 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/old-k8s-version-336902/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-621366 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (10m18.503104974s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-621366 -n default-k8s-diff-port-621366
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (618.87s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xs74f" [8cc02373-0fad-4459-b5d3-74ff161b3572] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xs74f" [8cc02373-0fad-4459-b5d3-74ff161b3572] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.030041638s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xs74f" [8cc02373-0fad-4459-b5d3-74ff161b3572] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007315271s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-378337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-378337 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/embed-certs/serial/Pause (3.4s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-378337 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-378337 -n embed-certs-378337
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-378337 -n embed-certs-378337: exit status 2 (376.077786ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-378337 -n embed-certs-378337
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-378337 -n embed-certs-378337: exit status 2 (351.400939ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-378337 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-378337 -n embed-certs-378337
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-378337 -n embed-certs-378337
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.40s)

TestStartStop/group/newest-cni/serial/FirstStart (48.26s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-398055 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0718 00:48:48.736044 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:48.741341 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:48.751619 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:48.771942 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:48.812314 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:48.892619 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:49.052998 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:49.373588 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:50.014106 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:51.294600 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:53.855197 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:58.975755 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:48:59.119078 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/calico-483744/client.crt: no such file or directory
E0718 00:49:09.215949 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
E0718 00:49:23.955194 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/custom-flannel-483744/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-398055 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (48.262057538s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.26s)
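The newest-cni start passes --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, minikube's documented component.key=value form for forwarding a setting to one component. A small illustrative parser for that shape (not minikube's own code):

package main

import (
	"fmt"
	"strings"
)

// parseExtraConfig splits "kubeadm.pod-network-cidr=10.42.0.0/16" into
// component ("kubeadm"), key ("pod-network-cidr"), and value ("10.42.0.0/16").
func parseExtraConfig(arg string) (component, key, value string, err error) {
	kv := strings.SplitN(arg, "=", 2)
	if len(kv) != 2 {
		return "", "", "", fmt.Errorf("expected component.key=value, got %q", arg)
	}
	ck := strings.SplitN(kv[0], ".", 2)
	if len(ck) != 2 {
		return "", "", "", fmt.Errorf("expected component.key left of '=', got %q", kv[0])
	}
	return ck[0], ck[1], kv[1], nil
}

func main() {
	c, k, v, err := parseExtraConfig("kubeadm.pod-network-cidr=10.42.0.0/16")
	if err != nil {
		panic(err)
	}
	fmt.Printf("component=%s key=%s value=%s\n", c, k, v)
}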

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-398055 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-398055 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.152911771s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/newest-cni/serial/Stop (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-398055 --alsologtostderr -v=3
E0718 00:49:29.696129 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/no-preload-378585/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-398055 --alsologtostderr -v=3: (1.365176072s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-398055 -n newest-cni-398055
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-398055 -n newest-cni-398055: exit status 7 (74.770605ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-398055 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (30.62s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-398055 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0718 00:49:48.012733 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/flannel-483744/client.crt: no such file or directory
E0718 00:49:50.567558 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/enable-default-cni-483744/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-398055 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (30.156459731s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-398055 -n newest-cni-398055
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.62s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-398055 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/newest-cni/serial/Pause (3.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-398055 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-398055 -n newest-cni-398055
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-398055 -n newest-cni-398055: exit status 2 (357.775751ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-398055 -n newest-cni-398055
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-398055 -n newest-cni-398055: exit status 2 (361.944329ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-398055 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-398055 -n newest-cni-398055
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-398055 -n newest-cni-398055
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.24s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-jh8rg" [9e808dc8-867a-4513-97b2-bcb34213faf2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0223486s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-jh8rg" [9e808dc8-867a-4513-97b2-bcb34213faf2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00722946s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-621366 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-621366 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-621366 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-621366 -n default-k8s-diff-port-621366
E0718 00:57:04.085358 1806226 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/bridge-483744/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-621366 -n default-k8s-diff-port-621366: exit status 2 (338.935534ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-621366 -n default-k8s-diff-port-621366
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-621366 -n default-k8s-diff-port-621366: exit status 2 (339.459754ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-621366 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-621366 -n default-k8s-diff-port-621366
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-621366 -n default-k8s-diff-port-621366
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.28s)

Test skip (29/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-897229 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-897229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-897229
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.55s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-483744 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-483744

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-483744

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-483744

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-483744

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-483744

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-483744

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-483744

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-483744

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-483744

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-483744

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: /etc/hosts:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: /etc/resolv.conf:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-483744

>>> host: crictl pods:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: crictl containers:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> k8s: describe netcat deployment:
error: context "kubenet-483744" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-483744" does not exist

>>> k8s: netcat logs:
error: context "kubenet-483744" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-483744" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-483744" does not exist

>>> k8s: coredns logs:
error: context "kubenet-483744" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-483744" does not exist

>>> k8s: api server logs:
error: context "kubenet-483744" does not exist

>>> host: /etc/cni:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: ip a s:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: ip r s:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: iptables-save:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: iptables table nat:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-483744" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-483744" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-483744" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: kubelet daemon config:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> k8s: kubelet logs:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 18 Jul 2023 00:14:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-060034
contexts:
- context:
    cluster: pause-060034
    extensions:
    - extension:
        last-update: Tue, 18 Jul 2023 00:14:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.0
      name: context_info
    namespace: default
    user: pause-060034
  name: pause-060034
current-context: pause-060034
kind: Config
preferences: {}
users:
- name: pause-060034
  user:
    client-certificate: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/pause-060034/client.crt
    client-key: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/pause-060034/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-483744

>>> host: docker daemon status:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: docker daemon config:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: docker system info:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: cri-docker daemon status:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: cri-docker daemon config:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: cri-dockerd version:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: containerd daemon status:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: containerd daemon config:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: containerd config dump:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: crio daemon status:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: crio daemon config:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: /etc/crio:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

>>> host: crio config:
* Profile "kubenet-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-483744"

----------------------- debugLogs end: kubenet-483744 [took: 3.38748897s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-483744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-483744
--- SKIP: TestNetworkPlugins/group/kubenet (3.55s)

x
+
TestNetworkPlugins/group/cilium (4.04s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-483744 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-483744

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-483744

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-483744

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-483744

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-483744

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-483744

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-483744

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-483744

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-483744

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-483744

>>> host: /etc/nsswitch.conf:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: /etc/hosts:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: /etc/resolv.conf:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-483744

>>> host: crictl pods:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: crictl containers:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> k8s: describe netcat deployment:
error: context "cilium-483744" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-483744" does not exist

>>> k8s: netcat logs:
error: context "cilium-483744" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-483744" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-483744" does not exist

>>> k8s: coredns logs:
error: context "cilium-483744" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-483744" does not exist

>>> k8s: api server logs:
error: context "cilium-483744" does not exist

>>> host: /etc/cni:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: ip a s:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: ip r s:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: iptables-save:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: iptables table nat:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-483744

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-483744

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-483744" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-483744" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-483744

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-483744

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-483744" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-483744" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-483744" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-483744" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-483744" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: kubelet daemon config:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> k8s: kubelet logs:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16899-1800837/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 18 Jul 2023 00:14:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-060034
contexts:
- context:
    cluster: pause-060034
    extensions:
    - extension:
        last-update: Tue, 18 Jul 2023 00:14:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.0
      name: context_info
    namespace: default
    user: pause-060034
  name: pause-060034
current-context: pause-060034
kind: Config
preferences: {}
users:
- name: pause-060034
  user:
    client-certificate: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/pause-060034/client.crt
    client-key: /home/jenkins/minikube-integration/16899-1800837/.minikube/profiles/pause-060034/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-483744

>>> host: docker daemon status:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: docker daemon config:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: docker system info:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: cri-docker daemon status:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: cri-docker daemon config:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: cri-dockerd version:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: containerd daemon status:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: containerd daemon config:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: containerd config dump:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: crio daemon status:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: crio daemon config:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: /etc/crio:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

>>> host: crio config:
* Profile "cilium-483744" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-483744"

----------------------- debugLogs end: cilium-483744 [took: 3.861692571s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-483744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-483744
--- SKIP: TestNetworkPlugins/group/cilium (4.04s)

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-156423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-156423
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)