Test Report: Docker_Linux_crio_arm64 16634

6805b8e24f6b638eafcb6b686650007fe9811ef1:2023-06-05:29567

Failed tests (9/296)

|-------|-----------------------------------------------------|--------------|
| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
|    24 | TestAddons/parallel/Registry                        |       180.81 |
|    25 | TestAddons/parallel/Ingress                         |       168.11 |
|   152 | TestIngressAddonLegacy/serial/ValidateIngressAddons |       180.33 |
|   202 | TestMultiNode/serial/PingHostFrom2Pods              |         4.73 |
|   217 | TestPreload                                         |          183 |
|   223 | TestRunningBinaryUpgrade                            |        71.94 |
|   226 | TestMissingContainerUpgrade                         |       108.65 |
|   238 | TestStoppedBinaryUpgrade/Upgrade                    |        141.9 |
|   249 | TestPause/serial/SecondStartNoReconfiguration       |         52.1 |
|-------|-----------------------------------------------------|--------------|
TestAddons/parallel/Registry (180.81s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 70.965922ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-d94xj" [3b4e0792-a45f-41f1-911a-36c1609f1e26] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.017345191s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6c5b7" [542106f4-ef94-45fe-8183-768a7d7b500f] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.019686563s
addons_test.go:316: (dbg) Run:  kubectl --context addons-735995 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-735995 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-735995 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.774424498s)
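At this point the test has twice waited for labeled pods to become healthy (the helpers_test.go:344 lines above) and confirmed in-cluster access with the busybox wget probe. A rough standalone equivalent of such a wait, sketched by shelling out to kubectl rather than using the suite's own helpers (waitForPods and its exact flags here are illustrative assumptions, not the test's implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForPods polls `kubectl wait` until pods matching the selector report
// Ready or the deadline passes, mirroring the "waiting 6m0s for pods
// matching ..." lines in the log. kubectl and the addons-735995 context
// are assumed to be available.
func waitForPods(ctx, namespace, selector string, timeout time.Duration) error {
	cmd := exec.Command("kubectl", "--context", ctx, "-n", namespace,
		"wait", "--for=condition=Ready", "pod",
		"-l", selector, "--timeout", timeout.String())
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("pods %q not ready: %v\n%s", selector, err, out)
	}
	return nil
}

func main() {
	if err := waitForPods("addons-735995", "kube-system", "actual-registry=true", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
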
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-735995 ip
2023/06/05 17:34:02 [DEBUG] GET http://192.168.49.2:5000
2023/06/05 17:34:02 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:02 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/06/05 17:34:03 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:03 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
addons_test.go:361: failed to check external access to http://192.168.49.2:5000: GET http://192.168.49.2:5000 giving up after 5 attempt(s): Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-735995 addons disable registry --alsologtostderr -v=1
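The failure above is the external-access assertion at addons_test.go:361: after minikube ip reports 192.168.49.2, the test GETs the registry on port 5000 and gives up after five attempts, each refused. A minimal standalone sketch of that probe, assuming a plain net/http client with linear backoff (probeRegistry is a hypothetical helper; the suite's actual retry implementation may differ):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeRegistry issues a GET against the registry endpoint, retrying the
// way the log shows ("retrying in 1s (4 left)", "retrying in 2s (3 left)").
func probeRegistry(url string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // registry answered; external access works
		}
		lastErr = err
		if i < attempts-1 {
			time.Sleep(time.Duration(i+1) * time.Second) // increasing backoff
		}
	}
	return fmt.Errorf("giving up after %d attempt(s): %w", attempts, lastErr)
}

func main() {
	// 192.168.49.2 is the cluster IP reported by `minikube ip` in this run.
	if err := probeRegistry("http://192.168.49.2:5000", 5); err != nil {
		fmt.Println("external access check failed:", err)
	}
}
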
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-735995
helpers_test.go:235: (dbg) docker inspect addons-735995:

-- stdout --
	[
	    {
	        "Id": "d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee",
	        "Created": "2023-06-05T17:31:22.465496878Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 408780,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-05T17:31:22.784389528Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:80ea0da8caa6eb7997e8d55fe8736424844c5160aabf0e85547dc140c538e81f",
	        "ResolvConfPath": "/var/lib/docker/containers/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/hostname",
	        "HostsPath": "/var/lib/docker/containers/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/hosts",
	        "LogPath": "/var/lib/docker/containers/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee-json.log",
	        "Name": "/addons-735995",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-735995:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-735995",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f75dc16798ef17a9ee419c8cbed7f80b10986520aedeaf4741b2810dc0f0ff3a-init/diff:/var/lib/docker/overlay2/12deadd96699cc2736cf6d24a9900cb6d72f9bc5f3f15d793b28adb475def155/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f75dc16798ef17a9ee419c8cbed7f80b10986520aedeaf4741b2810dc0f0ff3a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f75dc16798ef17a9ee419c8cbed7f80b10986520aedeaf4741b2810dc0f0ff3a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f75dc16798ef17a9ee419c8cbed7f80b10986520aedeaf4741b2810dc0f0ff3a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-735995",
	                "Source": "/var/lib/docker/volumes/addons-735995/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-735995",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-735995",
	                "name.minikube.sigs.k8s.io": "addons-735995",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3286cc24a1af956fce6ba6162fcacaa3d0c7bb789e5ed3106b69f6620cc75322",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3286cc24a1af",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-735995": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d36a4170624d",
	                        "addons-735995"
	                    ],
	                    "NetworkID": "0b90d709d07d267fe9ad697ed6f8beb09db82befd8b2368e245ec4b456227819",
	                    "EndpointID": "2ce097cda11197b55e493209eedc5dbbd5670bb6c3b2e221f950bbccfbd35e31",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
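For reference when reading the Ports map above: 5000/tcp is published on 127.0.0.1:33111, while the failed probe targeted the container IP 192.168.49.2:5000 directly. The host-port side of such a mapping can be read back with the same Go-template inspect query this log later uses for 22/tcp; hostPortFor below is a hypothetical wrapper around that docker command, not part of the suite:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor returns the host port Docker published for a container port,
// using the template pattern visible in the cli_runner.go log lines.
func hostPortFor(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Expected to print 33111 for this run, per the Ports map above.
	port, err := hostPortFor("addons-735995", "5000/tcp")
	fmt.Println(port, err)
}
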
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-735995 -n addons-735995
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-735995 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-735995 logs -n 25: (2.151399817s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-535520   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC |                     |
	|         | -p download-only-535520        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-535520   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC |                     |
	|         | -p download-only-535520        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:30 UTC |
	| delete  | -p download-only-535520        | download-only-535520   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:30 UTC |
	| delete  | -p download-only-535520        | download-only-535520   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:30 UTC |
	| start   | --download-only -p             | download-docker-501309 | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC |                     |
	|         | download-docker-501309         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-501309      | download-docker-501309 | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:30 UTC |
	| start   | --download-only -p             | binary-mirror-444845   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC |                     |
	|         | binary-mirror-444845           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43845         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-444845        | binary-mirror-444845   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:30 UTC |
	| start   | -p addons-735995               | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:33 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:33 UTC | 05 Jun 23 17:33 UTC |
	|         | addons-735995                  |                        |         |         |                     |                     |
	| addons  | addons-735995 addons           | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:33 UTC | 05 Jun 23 17:33 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-735995 ip               | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:34 UTC | 05 Jun 23 17:34 UTC |
	| addons  | disable inspektor-gadget -p    | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:34 UTC | 05 Jun 23 17:34 UTC |
	|         | addons-735995                  |                        |         |         |                     |                     |
	| ssh     | addons-735995 ssh curl -s      | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-735995 ip               | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:36 UTC | 05 Jun 23 17:36 UTC |
	| addons  | addons-735995 addons disable   | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:36 UTC | 05 Jun 23 17:36 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/05 17:30:59
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0605 17:30:59.326987  408313 out.go:296] Setting OutFile to fd 1 ...
	I0605 17:30:59.327145  408313 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:30:59.327155  408313 out.go:309] Setting ErrFile to fd 2...
	I0605 17:30:59.327161  408313 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:30:59.327320  408313 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 17:30:59.327767  408313 out.go:303] Setting JSON to false
	I0605 17:30:59.328844  408313 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":7992,"bootTime":1685978268,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 17:30:59.328914  408313 start.go:137] virtualization:  
	I0605 17:30:59.331813  408313 out.go:177] * [addons-735995] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 17:30:59.334799  408313 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 17:30:59.336710  408313 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 17:30:59.334995  408313 notify.go:220] Checking for updates...
	I0605 17:30:59.340951  408313 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:30:59.343111  408313 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 17:30:59.345022  408313 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 17:30:59.346828  408313 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 17:30:59.349158  408313 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 17:30:59.375159  408313 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 17:30:59.375251  408313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:30:59.453762  408313 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-06-05 17:30:59.442899443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:30:59.453870  408313 docker.go:294] overlay module found
	I0605 17:30:59.456139  408313 out.go:177] * Using the docker driver based on user configuration
	I0605 17:30:59.457898  408313 start.go:297] selected driver: docker
	I0605 17:30:59.457936  408313 start.go:875] validating driver "docker" against <nil>
	I0605 17:30:59.457964  408313 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 17:30:59.458608  408313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:30:59.529284  408313 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-06-05 17:30:59.519742281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:30:59.529453  408313 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0605 17:30:59.529683  408313 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0605 17:30:59.531469  408313 out.go:177] * Using Docker driver with root privileges
	I0605 17:30:59.533447  408313 cni.go:84] Creating CNI manager for ""
	I0605 17:30:59.533462  408313 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 17:30:59.533472  408313 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0605 17:30:59.533491  408313 start_flags.go:319] config:
	{Name:addons-735995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-735995 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:30:59.535686  408313 out.go:177] * Starting control plane node addons-735995 in cluster addons-735995
	I0605 17:30:59.537248  408313 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 17:30:59.538768  408313 out.go:177] * Pulling base image ...
	I0605 17:30:59.540391  408313 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:30:59.540448  408313 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0605 17:30:59.540460  408313 cache.go:57] Caching tarball of preloaded images
	I0605 17:30:59.540467  408313 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon
	I0605 17:30:59.540540  408313 preload.go:174] Found /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0605 17:30:59.540551  408313 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0605 17:30:59.540905  408313 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/config.json ...
	I0605 17:30:59.540937  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/config.json: {Name:mk3fe78a0ad294e23755d3263268d2e6984b6994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:30:59.557974  408313 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f to local cache
	I0605 17:30:59.558089  408313 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local cache directory
	I0605 17:30:59.558110  408313 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local cache directory, skipping pull
	I0605 17:30:59.558115  408313 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f exists in cache, skipping pull
	I0605 17:30:59.558122  408313 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f as a tarball
	I0605 17:30:59.558127  408313 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f from local cache
	I0605 17:31:15.021849  408313 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f from cached tarball
	I0605 17:31:15.021886  408313 cache.go:195] Successfully downloaded all kic artifacts
	I0605 17:31:15.021936  408313 start.go:364] acquiring machines lock for addons-735995: {Name:mk0ceb74f7c7ec6a93eb00c47587bcbeb49c1769 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 17:31:15.023182  408313 start.go:368] acquired machines lock for "addons-735995" in 1.21141ms
	I0605 17:31:15.023249  408313 start.go:93] Provisioning new machine with config: &{Name:addons-735995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-735995 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0605 17:31:15.023348  408313 start.go:125] createHost starting for "" (driver="docker")
	I0605 17:31:15.026141  408313 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0605 17:31:15.026472  408313 start.go:159] libmachine.API.Create for "addons-735995" (driver="docker")
	I0605 17:31:15.026504  408313 client.go:168] LocalClient.Create starting
	I0605 17:31:15.026641  408313 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem
	I0605 17:31:15.495319  408313 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem
	I0605 17:31:15.868885  408313 cli_runner.go:164] Run: docker network inspect addons-735995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0605 17:31:15.889980  408313 cli_runner.go:211] docker network inspect addons-735995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0605 17:31:15.890061  408313 network_create.go:281] running [docker network inspect addons-735995] to gather additional debugging logs...
	I0605 17:31:15.890082  408313 cli_runner.go:164] Run: docker network inspect addons-735995
	W0605 17:31:15.908355  408313 cli_runner.go:211] docker network inspect addons-735995 returned with exit code 1
	I0605 17:31:15.908389  408313 network_create.go:284] error running [docker network inspect addons-735995]: docker network inspect addons-735995: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-735995 not found
	I0605 17:31:15.908401  408313 network_create.go:286] output of [docker network inspect addons-735995]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-735995 not found
	
	** /stderr **
	I0605 17:31:15.908481  408313 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 17:31:15.929143  408313 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400166a8e0}
	I0605 17:31:15.929186  408313 network_create.go:123] attempt to create docker network addons-735995 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0605 17:31:15.929242  408313 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-735995 addons-735995
	I0605 17:31:16.021244  408313 network_create.go:107] docker network addons-735995 192.168.49.0/24 created
	I0605 17:31:16.021276  408313 kic.go:117] calculated static IP "192.168.49.2" for the "addons-735995" container
	I0605 17:31:16.021362  408313 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0605 17:31:16.040896  408313 cli_runner.go:164] Run: docker volume create addons-735995 --label name.minikube.sigs.k8s.io=addons-735995 --label created_by.minikube.sigs.k8s.io=true
	I0605 17:31:16.059628  408313 oci.go:103] Successfully created a docker volume addons-735995
	I0605 17:31:16.059728  408313 cli_runner.go:164] Run: docker run --rm --name addons-735995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-735995 --entrypoint /usr/bin/test -v addons-735995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -d /var/lib
	I0605 17:31:18.191403  408313 cli_runner.go:217] Completed: docker run --rm --name addons-735995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-735995 --entrypoint /usr/bin/test -v addons-735995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -d /var/lib: (2.131623299s)
	I0605 17:31:18.191436  408313 oci.go:107] Successfully prepared a docker volume addons-735995
	I0605 17:31:18.191458  408313 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:31:18.191477  408313 kic.go:190] Starting extracting preloaded images to volume ...
	I0605 17:31:18.191563  408313 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-735995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -I lz4 -xf /preloaded.tar -C /extractDir
	I0605 17:31:22.388961  408313 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-735995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -I lz4 -xf /preloaded.tar -C /extractDir: (4.197339272s)
	I0605 17:31:22.388999  408313 kic.go:199] duration metric: took 4.197518 seconds to extract preloaded images to volume
	W0605 17:31:22.389149  408313 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0605 17:31:22.389264  408313 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0605 17:31:22.449364  408313 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-735995 --name addons-735995 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-735995 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-735995 --network addons-735995 --ip 192.168.49.2 --volume addons-735995:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f
	I0605 17:31:22.794143  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Running}}
	I0605 17:31:22.830592  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:31:22.857396  408313 cli_runner.go:164] Run: docker exec addons-735995 stat /var/lib/dpkg/alternatives/iptables
	I0605 17:31:22.957583  408313 oci.go:144] the created container "addons-735995" has a running status.
	I0605 17:31:22.957609  408313 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa...
	I0605 17:31:23.187062  408313 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0605 17:31:23.216782  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:31:23.255052  408313 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0605 17:31:23.255075  408313 kic_runner.go:114] Args: [docker exec --privileged addons-735995 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0605 17:31:23.350962  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:31:23.377023  408313 machine.go:88] provisioning docker machine ...
	I0605 17:31:23.377058  408313 ubuntu.go:169] provisioning hostname "addons-735995"
	I0605 17:31:23.377129  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:23.403693  408313 main.go:141] libmachine: Using SSH client type: native
	I0605 17:31:23.404158  408313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0605 17:31:23.404171  408313 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-735995 && echo "addons-735995" | sudo tee /etc/hostname
	I0605 17:31:23.404920  408313 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60084->127.0.0.1:33113: read: connection reset by peer
	I0605 17:31:26.559791  408313 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-735995
	
	I0605 17:31:26.559873  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:26.579289  408313 main.go:141] libmachine: Using SSH client type: native
	I0605 17:31:26.579725  408313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0605 17:31:26.579742  408313 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-735995' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-735995/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-735995' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0605 17:31:26.721528  408313 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0605 17:31:26.721554  408313 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16634-402421/.minikube CaCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16634-402421/.minikube}
	I0605 17:31:26.721581  408313 ubuntu.go:177] setting up certificates
	I0605 17:31:26.721602  408313 provision.go:83] configureAuth start
	I0605 17:31:26.721674  408313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-735995
	I0605 17:31:26.740476  408313 provision.go:138] copyHostCerts
	I0605 17:31:26.740562  408313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem (1082 bytes)
	I0605 17:31:26.740686  408313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem (1123 bytes)
	I0605 17:31:26.740748  408313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem (1675 bytes)
	I0605 17:31:26.740794  408313 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem org=jenkins.addons-735995 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-735995]
	I0605 17:31:28.545326  408313 provision.go:172] copyRemoteCerts
	I0605 17:31:28.545455  408313 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0605 17:31:28.545507  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:28.564191  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:31:28.667638  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0605 17:31:28.699391  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0605 17:31:28.734001  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0605 17:31:28.763699  408313 provision.go:86] duration metric: configureAuth took 2.0420591s
	I0605 17:31:28.763725  408313 ubuntu.go:193] setting minikube options for container-runtime
	I0605 17:31:28.763940  408313 config.go:182] Loaded profile config "addons-735995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:31:28.764043  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:28.782569  408313 main.go:141] libmachine: Using SSH client type: native
	I0605 17:31:28.783018  408313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0605 17:31:28.783042  408313 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0605 17:31:29.049202  408313 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0605 17:31:29.049228  408313 machine.go:91] provisioned docker machine in 5.672186833s
	I0605 17:31:29.049238  408313 client.go:171] LocalClient.Create took 14.022724195s
	I0605 17:31:29.049250  408313 start.go:167] duration metric: libmachine.API.Create for "addons-735995" took 14.022779398s
	I0605 17:31:29.049257  408313 start.go:300] post-start starting for "addons-735995" (driver="docker")
	I0605 17:31:29.049267  408313 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0605 17:31:29.049333  408313 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0605 17:31:29.049383  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:29.068302  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:31:29.172863  408313 ssh_runner.go:195] Run: cat /etc/os-release
	I0605 17:31:29.177506  408313 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0605 17:31:29.177556  408313 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0605 17:31:29.177567  408313 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0605 17:31:29.177580  408313 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0605 17:31:29.177593  408313 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/addons for local assets ...
	I0605 17:31:29.177668  408313 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/files for local assets ...
	I0605 17:31:29.177694  408313 start.go:303] post-start completed in 128.427495ms
	I0605 17:31:29.178017  408313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-735995
	I0605 17:31:29.196617  408313 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/config.json ...
	I0605 17:31:29.196900  408313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 17:31:29.196950  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:29.214816  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:31:29.310396  408313 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0605 17:31:29.316254  408313 start.go:128] duration metric: createHost completed in 14.29287158s
	I0605 17:31:29.316278  408313 start.go:83] releasing machines lock for "addons-735995", held for 14.293062086s
	I0605 17:31:29.316354  408313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-735995
	I0605 17:31:29.337814  408313 ssh_runner.go:195] Run: cat /version.json
	I0605 17:31:29.337913  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:29.338180  408313 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0605 17:31:29.338241  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:29.358043  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:31:29.364023  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:31:29.452740  408313 ssh_runner.go:195] Run: systemctl --version
	I0605 17:31:29.598017  408313 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0605 17:31:29.747556  408313 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0605 17:31:29.753567  408313 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 17:31:29.777352  408313 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0605 17:31:29.777431  408313 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 17:31:29.817079  408313 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
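(For reference, the two find/mv passes above neuter the base image's default CNI configs: first the loopback config, then any bridge/podman configs such as the 87-podman-bridge.conflist and 100-crio-bridge.conf named in the log. Each file is renamed with a .mk_disabled suffix, leaving /etc/cni/net.d free for the kindnet config chosen later. A rough local-filesystem equivalent of that logic; the real runs go over SSH:)

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pat := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
            matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
            for _, f := range matches {
                if strings.HasSuffix(f, ".mk_disabled") {
                    continue // already disabled on a previous start
                }
                // e.g. 100-crio-bridge.conf -> 100-crio-bridge.conf.mk_disabled
                _ = os.Rename(f, f+".mk_disabled")
            }
        }
    }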
	I0605 17:31:29.817104  408313 start.go:481] detecting cgroup driver to use...
	I0605 17:31:29.817158  408313 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0605 17:31:29.817223  408313 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0605 17:31:29.836008  408313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0605 17:31:29.850278  408313 docker.go:193] disabling cri-docker service (if available) ...
	I0605 17:31:29.850348  408313 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0605 17:31:29.867195  408313 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0605 17:31:29.885141  408313 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0605 17:31:29.977057  408313 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0605 17:31:30.113485  408313 docker.go:209] disabling docker service ...
	I0605 17:31:30.113636  408313 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0605 17:31:30.140950  408313 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0605 17:31:30.156959  408313 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0605 17:31:30.259855  408313 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0605 17:31:30.366361  408313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
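(The cri-docker and docker teardown above follows a stop/disable/mask sequence: stop -f kills the unit now, disable removes boot-time activation, and mask links the unit to /dev/null so socket activation cannot resurrect it while CRI-O owns the node. A sketch of the same sequence run locally rather than over SSH; error handling is deliberately tolerant, as some of these units may not exist:)

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        for _, args := range [][]string{
            {"systemctl", "stop", "-f", "docker.socket"},
            {"systemctl", "stop", "-f", "docker.service"},
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                log.Printf("%v: %v (%s)", args, err, out) // non-fatal: unit may be absent
            }
        }
    }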
	I0605 17:31:30.380895  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 17:31:30.401514  408313 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0605 17:31:30.401579  408313 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:31:30.413859  408313 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0605 17:31:30.413926  408313 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:31:30.426202  408313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:31:30.438818  408313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:31:30.451649  408313 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0605 17:31:30.463281  408313 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0605 17:31:30.475020  408313 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0605 17:31:30.485702  408313 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 17:31:30.577741  408313 ssh_runner.go:195] Run: sudo systemctl restart crio
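(The three sed edits above pin the pause image to registry.k8s.io/pause:3.9, switch CRI-O to the "cgroupfs" driver detected on the host, and force conmon into the pod cgroup; the daemon-reload and restart then make them take effect. A minimal in-process sketch of the same rewrites, assuming the drop-in file already contains pause_image and cgroup_manager lines as the base image's default does:)

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        conf, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // Mirror the delete-then-append sed pair: drop any conmon_cgroup line,
        // then re-add it immediately after the rewritten cgroup_manager line.
        conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).ReplaceAll(conf, nil)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(conf, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            panic(err)
        }
    }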
	I0605 17:31:30.707083  408313 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0605 17:31:30.707227  408313 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0605 17:31:30.712163  408313 start.go:549] Will wait 60s for crictl version
	I0605 17:31:30.712270  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:31:30.716864  408313 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0605 17:31:30.764695  408313 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0605 17:31:30.764848  408313 ssh_runner.go:195] Run: crio --version
	I0605 17:31:30.808420  408313 ssh_runner.go:195] Run: crio --version
	I0605 17:31:30.859133  408313 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0605 17:31:30.861500  408313 cli_runner.go:164] Run: docker network inspect addons-735995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 17:31:30.880649  408313 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0605 17:31:30.885643  408313 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0605 17:31:30.900519  408313 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:31:30.900595  408313 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 17:31:30.968016  408313 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 17:31:30.968041  408313 crio.go:415] Images already preloaded, skipping extraction
	I0605 17:31:30.968097  408313 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 17:31:31.011955  408313 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 17:31:31.011975  408313 cache_images.go:84] Images are preloaded, skipping loading
	I0605 17:31:31.012050  408313 ssh_runner.go:195] Run: crio config
	I0605 17:31:31.070997  408313 cni.go:84] Creating CNI manager for ""
	I0605 17:31:31.071019  408313 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 17:31:31.071029  408313 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0605 17:31:31.071078  408313 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-735995 NodeName:addons-735995 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0605 17:31:31.071265  408313 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-735995"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0605 17:31:31.071364  408313 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-735995 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-735995 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0605 17:31:31.071490  408313 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0605 17:31:31.084446  408313 binaries.go:44] Found k8s binaries, skipping transfer
	I0605 17:31:31.084548  408313 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0605 17:31:31.096019  408313 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0605 17:31:31.118976  408313 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0605 17:31:31.142228  408313 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0605 17:31:31.164643  408313 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0605 17:31:31.169694  408313 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
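(Both /etc/hosts updates, host.minikube.internal earlier at 17:31:30.885 and control-plane.minikube.internal here, use the same idempotent pattern: filter out any existing line ending in tab plus the name, append the fresh mapping, and copy the result back with sudo. A sketch of that pattern as a hypothetical helper, assuming direct local file access:)

    package main

    import (
        "os"
        "strings"
    )

    // pinHost rewrites /etc/hosts so exactly one tab-separated line maps name to ip.
    func pinHost(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHost("192.168.49.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }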
	I0605 17:31:31.183684  408313 certs.go:56] Setting up /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995 for IP: 192.168.49.2
	I0605 17:31:31.183715  408313 certs.go:190] acquiring lock for shared ca certs: {Name:mkcde6289d01a116d789395fcd8dd485889e790f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:31.184373  408313 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key
	I0605 17:31:31.530650  408313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt ...
	I0605 17:31:31.530681  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt: {Name:mkf49f4d39ebeac83c30991cc1274d93bb2ecfd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:31.530877  408313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key ...
	I0605 17:31:31.530890  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key: {Name:mk1b94a487155252cc57cad80ff80c092402ff2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:31.531572  408313 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key
	I0605 17:31:31.836626  408313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt ...
	I0605 17:31:31.836656  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt: {Name:mk4460f0a8ac3fe54bd8e18f0dd4ba041104b31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:31.836859  408313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key ...
	I0605 17:31:31.836873  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key: {Name:mkdd992c9bdc4ae6fcee640dafcd67541c1b69de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:31.837001  408313 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.key
	I0605 17:31:31.837019  408313 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt with IP's: []
	I0605 17:31:32.526900  408313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt ...
	I0605 17:31:32.526929  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: {Name:mk7ddd7bc5b092db3126d3aab300b4f0c0cef595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:32.527121  408313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.key ...
	I0605 17:31:32.527133  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.key: {Name:mk566ca60473fb7fbdadb54c09d85de4da3cf711 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:32.527213  408313 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key.dd3b5fb2
	I0605 17:31:32.527233  408313 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0605 17:31:32.760633  408313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt.dd3b5fb2 ...
	I0605 17:31:32.760666  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt.dd3b5fb2: {Name:mkf1dea16d5fc1ea558696eaeb602a863a0d36b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:32.760847  408313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key.dd3b5fb2 ...
	I0605 17:31:32.760860  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key.dd3b5fb2: {Name:mk0fc779103cc1c6963f333ce8367339ae39a20b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:32.760940  408313 certs.go:337] copying /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt
	I0605 17:31:32.761009  408313 certs.go:341] copying /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key
	I0605 17:31:32.761059  408313 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.key
	I0605 17:31:32.761072  408313 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.crt with IP's: []
	I0605 17:31:33.761138  408313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.crt ...
	I0605 17:31:33.761172  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.crt: {Name:mkdeaad2a3e905d8816cd9150953f41baa4017a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:33.761451  408313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.key ...
	I0605 17:31:33.761466  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.key: {Name:mk1d9aba57a07148941aee25cfc5e392e01e2538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:33.761688  408313 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem (1679 bytes)
	I0605 17:31:33.761737  408313 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem (1082 bytes)
	I0605 17:31:33.761764  408313 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem (1123 bytes)
	I0605 17:31:33.761795  408313 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem (1675 bytes)
	I0605 17:31:33.762532  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0605 17:31:33.792020  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0605 17:31:33.821464  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0605 17:31:33.851748  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0605 17:31:33.881355  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0605 17:31:33.910906  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0605 17:31:33.940541  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0605 17:31:33.970119  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0605 17:31:34.000785  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0605 17:31:34.030853  408313 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0605 17:31:34.052972  408313 ssh_runner.go:195] Run: openssl version
	I0605 17:31:34.060350  408313 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0605 17:31:34.072424  408313 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:31:34.078349  408313 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun  5 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:31:34.078427  408313 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:31:34.087545  408313 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
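(The hash step above matters because OpenSSL locates trusted CAs in /etc/ssl/certs by subject-hash filename: `openssl x509 -hash -noout` over minikubeCA.pem yields b5213941, hence the b5213941.0 symlink created next. A sketch of the same two steps, shelling out to openssl exactly as the remote commands do; the paths are the ones from the log:)

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const pem = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // "b5213941" for this CA
        link := "/etc/ssl/certs/" + hash + ".0"
        os.Remove(link) // replace any stale link (needs root)
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
    }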
	I0605 17:31:34.099851  408313 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0605 17:31:34.104596  408313 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0605 17:31:34.104645  408313 kubeadm.go:404] StartCluster: {Name:addons-735995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-735995 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:31:34.104739  408313 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0605 17:31:34.104808  408313 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0605 17:31:34.148028  408313 cri.go:88] found id: ""
	I0605 17:31:34.148101  408313 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0605 17:31:34.159154  408313 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0605 17:31:34.170184  408313 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0605 17:31:34.170248  408313 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0605 17:31:34.181346  408313 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0605 17:31:34.181415  408313 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0605 17:31:34.237284  408313 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0605 17:31:34.237542  408313 kubeadm.go:322] [preflight] Running pre-flight checks
	I0605 17:31:34.285270  408313 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0605 17:31:34.285392  408313 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-aws
	I0605 17:31:34.285452  408313 kubeadm.go:322] OS: Linux
	I0605 17:31:34.285534  408313 kubeadm.go:322] CGROUPS_CPU: enabled
	I0605 17:31:34.285619  408313 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0605 17:31:34.285701  408313 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0605 17:31:34.285781  408313 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0605 17:31:34.285848  408313 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0605 17:31:34.285938  408313 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0605 17:31:34.286012  408313 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0605 17:31:34.286084  408313 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0605 17:31:34.286160  408313 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0605 17:31:34.365365  408313 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0605 17:31:34.365542  408313 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0605 17:31:34.365670  408313 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0605 17:31:34.618552  408313 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0605 17:31:34.620879  408313 out.go:204]   - Generating certificates and keys ...
	I0605 17:31:34.621110  408313 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0605 17:31:34.621224  408313 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0605 17:31:34.892810  408313 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0605 17:31:35.470339  408313 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0605 17:31:35.757889  408313 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0605 17:31:36.284435  408313 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0605 17:31:37.117622  408313 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0605 17:31:37.118020  408313 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-735995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0605 17:31:37.660850  408313 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0605 17:31:37.661149  408313 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-735995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0605 17:31:38.104541  408313 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0605 17:31:38.369416  408313 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0605 17:31:39.005320  408313 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0605 17:31:39.005388  408313 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0605 17:31:39.364374  408313 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0605 17:31:39.625107  408313 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0605 17:31:40.003056  408313 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0605 17:31:40.278570  408313 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0605 17:31:40.289884  408313 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0605 17:31:40.291596  408313 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0605 17:31:40.291651  408313 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0605 17:31:40.406773  408313 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0605 17:31:40.409129  408313 out.go:204]   - Booting up control plane ...
	I0605 17:31:40.409251  408313 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0605 17:31:40.410794  408313 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0605 17:31:40.411864  408313 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0605 17:31:40.413012  408313 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0605 17:31:40.416391  408313 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0605 17:31:49.420046  408313 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002687 seconds
	I0605 17:31:49.420161  408313 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0605 17:31:49.438114  408313 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0605 17:31:49.964100  408313 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0605 17:31:49.964320  408313 kubeadm.go:322] [mark-control-plane] Marking the node addons-735995 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0605 17:31:50.476969  408313 kubeadm.go:322] [bootstrap-token] Using token: xd6dl9.7f38bvf10mlyqyhb
	I0605 17:31:50.478682  408313 out.go:204]   - Configuring RBAC rules ...
	I0605 17:31:50.478800  408313 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0605 17:31:50.485673  408313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0605 17:31:50.495111  408313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0605 17:31:50.498302  408313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0605 17:31:50.501809  408313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0605 17:31:50.505752  408313 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0605 17:31:50.519344  408313 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0605 17:31:50.756554  408313 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0605 17:31:50.905287  408313 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0605 17:31:50.906279  408313 kubeadm.go:322] 
	I0605 17:31:50.906351  408313 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0605 17:31:50.906358  408313 kubeadm.go:322] 
	I0605 17:31:50.906430  408313 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0605 17:31:50.906434  408313 kubeadm.go:322] 
	I0605 17:31:50.906458  408313 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0605 17:31:50.906520  408313 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0605 17:31:50.906568  408313 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0605 17:31:50.906573  408313 kubeadm.go:322] 
	I0605 17:31:50.906623  408313 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0605 17:31:50.906627  408313 kubeadm.go:322] 
	I0605 17:31:50.906672  408313 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0605 17:31:50.906678  408313 kubeadm.go:322] 
	I0605 17:31:50.906727  408313 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0605 17:31:50.906797  408313 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0605 17:31:50.906861  408313 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0605 17:31:50.906866  408313 kubeadm.go:322] 
	I0605 17:31:50.906944  408313 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0605 17:31:50.907017  408313 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0605 17:31:50.907021  408313 kubeadm.go:322] 
	I0605 17:31:50.907100  408313 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xd6dl9.7f38bvf10mlyqyhb \
	I0605 17:31:50.907197  408313 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 \
	I0605 17:31:50.907217  408313 kubeadm.go:322] 	--control-plane 
	I0605 17:31:50.907221  408313 kubeadm.go:322] 
	I0605 17:31:50.907301  408313 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0605 17:31:50.907306  408313 kubeadm.go:322] 
	I0605 17:31:50.907382  408313 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xd6dl9.7f38bvf10mlyqyhb \
	I0605 17:31:50.907478  408313 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 
	I0605 17:31:50.909496  408313 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0605 17:31:50.909698  408313 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0605 17:31:50.909927  408313 kubeadm.go:322] W0605 17:31:34.365252    1052 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0605 17:31:50.910153  408313 kubeadm.go:322] W0605 17:31:40.413151    1052 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0605 17:31:50.910163  408313 cni.go:84] Creating CNI manager for ""
	I0605 17:31:50.910171  408313 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 17:31:50.914313  408313 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0605 17:31:50.916493  408313 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0605 17:31:50.943229  408313 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0605 17:31:50.943254  408313 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0605 17:31:50.998449  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0605 17:31:51.929427  408313 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0605 17:31:51.929580  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:51.929667  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=b059332e570e1d712234ec4f823aa77854e7956d minikube.k8s.io/name=addons-735995 minikube.k8s.io/updated_at=2023_06_05T17_31_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:52.137495  408313 ops.go:34] apiserver oom_adj: -16
	I0605 17:31:52.137603  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:52.741871  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:53.242210  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:53.741263  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:54.241925  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:54.741621  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:55.242212  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:55.741393  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:56.242105  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:56.742232  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:57.241428  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:57.741297  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:58.242006  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:58.742239  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:59.241716  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:59.741935  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:00.242034  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:00.741215  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:01.242198  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:01.742144  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:02.241299  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:02.741856  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:03.241282  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:03.741286  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:03.906705  408313 kubeadm.go:1076] duration metric: took 11.977184369s to wait for elevateKubeSystemPrivileges.
	I0605 17:32:03.906735  408313 kubeadm.go:406] StartCluster complete in 29.802094659s
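(The burst of identical `kubectl get sa default` runs above is a fixed-interval poll: kubeadm init returns before the controller manager has created the "default" ServiceAccount, and the minikube-rbac clusterrolebinding created at 17:31:51.929 is not usable until it exists. The ~12s reported for elevateKubeSystemPrivileges is just that wait. A sketch of such a loop, with names assumed rather than taken from minikube's code:)

    package main

    import (
        "os/exec"
        "time"
    )

    func main() {
        const kubeconfig = "/var/lib/minikube/kubeconfig"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Exits 0 only once the "default" ServiceAccount exists.
            err := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", kubeconfig).Run()
            if err == nil {
                return // control plane ready for RBAC/privilege setup
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
        }
        panic("timed out waiting for default service account")
    }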
	I0605 17:32:03.906750  408313 settings.go:142] acquiring lock: {Name:mk7ddedb44759cc39266e9c612309013659bd7a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:32:03.908158  408313 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:32:03.908575  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/kubeconfig: {Name:mkb77de9bf1ac5a664886fbfefd28a762472c016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:32:03.908811  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0605 17:32:03.909133  408313 config.go:182] Loaded profile config "addons-735995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:32:03.909233  408313 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0605 17:32:03.909309  408313 addons.go:66] Setting volumesnapshots=true in profile "addons-735995"
	I0605 17:32:03.909325  408313 addons.go:228] Setting addon volumesnapshots=true in "addons-735995"
	I0605 17:32:03.909380  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.909817  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.911519  408313 addons.go:66] Setting ingress=true in profile "addons-735995"
	I0605 17:32:03.911548  408313 addons.go:228] Setting addon ingress=true in "addons-735995"
	I0605 17:32:03.911604  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.912107  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.912196  408313 addons.go:66] Setting cloud-spanner=true in profile "addons-735995"
	I0605 17:32:03.912212  408313 addons.go:228] Setting addon cloud-spanner=true in "addons-735995"
	I0605 17:32:03.912246  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.912640  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.912724  408313 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-735995"
	I0605 17:32:03.912755  408313 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-735995"
	I0605 17:32:03.912794  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.913171  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.913249  408313 addons.go:66] Setting default-storageclass=true in profile "addons-735995"
	I0605 17:32:03.913267  408313 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-735995"
	I0605 17:32:03.913485  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.913547  408313 addons.go:66] Setting gcp-auth=true in profile "addons-735995"
	I0605 17:32:03.913564  408313 mustload.go:65] Loading cluster: addons-735995
	I0605 17:32:03.913717  408313 config.go:182] Loaded profile config "addons-735995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:32:03.913926  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.913996  408313 addons.go:66] Setting metrics-server=true in profile "addons-735995"
	I0605 17:32:03.914011  408313 addons.go:228] Setting addon metrics-server=true in "addons-735995"
	I0605 17:32:03.914041  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.914453  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.914528  408313 addons.go:66] Setting ingress-dns=true in profile "addons-735995"
	I0605 17:32:03.914544  408313 addons.go:228] Setting addon ingress-dns=true in "addons-735995"
	I0605 17:32:03.914578  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.914941  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.915006  408313 addons.go:66] Setting inspektor-gadget=true in profile "addons-735995"
	I0605 17:32:03.915021  408313 addons.go:228] Setting addon inspektor-gadget=true in "addons-735995"
	I0605 17:32:03.915045  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.915390  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.915456  408313 addons.go:66] Setting registry=true in profile "addons-735995"
	I0605 17:32:03.915471  408313 addons.go:228] Setting addon registry=true in "addons-735995"
	I0605 17:32:03.915495  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.919375  408313 addons.go:66] Setting storage-provisioner=true in profile "addons-735995"
	I0605 17:32:03.919404  408313 addons.go:228] Setting addon storage-provisioner=true in "addons-735995"
	I0605 17:32:03.919445  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.919872  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.936059  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:04.027856  408313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0605 17:32:04.049181  408313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0605 17:32:04.055790  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0605 17:32:04.063265  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0605 17:32:04.079911  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0605 17:32:04.079864  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0605 17:32:04.093984  408313 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.5
	I0605 17:32:04.096091  408313 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0605 17:32:04.096112  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0605 17:32:04.096177  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.107494  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0605 17:32:04.092808  408313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.0
	I0605 17:32:04.114998  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0605 17:32:04.115034  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0605 17:32:04.115105  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.133519  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0605 17:32:04.138972  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0605 17:32:04.143511  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0605 17:32:04.147436  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0605 17:32:04.162428  408313 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0605 17:32:04.162459  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0605 17:32:04.162519  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.168225  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:04.180057  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0605 17:32:04.180082  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0605 17:32:04.180153  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.180712  408313 addons.go:228] Setting addon default-storageclass=true in "addons-735995"
	I0605 17:32:04.180745  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:04.181156  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:04.211167  408313 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0605 17:32:04.215202  408313 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0605 17:32:04.215228  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0605 17:32:04.215297  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.233593  408313 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0605 17:32:04.238297  408313 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0605 17:32:04.238329  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0605 17:32:04.238398  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.249438  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
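(Decoded, the pipeline above fetches the coredns ConfigMap, edits the Corefile with sed, and replaces the ConfigMap in place. Reconstructed from the sed expression, it inserts this stanza immediately before the existing "forward . /etc/resolv.conf" line:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

and adds a "log" directive before the "errors" line, so that pods can resolve host.minikube.internal to the Docker network gateway while all other names fall through to the normal forwarder.)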
	I0605 17:32:04.267270  408313 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.16.1
	I0605 17:32:04.270076  408313 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0605 17:32:04.270131  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0605 17:32:04.270213  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.274550  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.334813  408313 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0605 17:32:04.338474  408313 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 17:32:04.338505  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0605 17:32:04.338598  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.377726  408313 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0605 17:32:04.377753  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0605 17:32:04.378075  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.428844  408313 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0605 17:32:04.437420  408313 out.go:177]   - Using image docker.io/registry:2.8.1
	I0605 17:32:04.436055  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.440727  408313 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0605 17:32:04.440750  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0605 17:32:04.440812  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.457001  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.460715  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.462319  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.496024  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.516335  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.533667  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.543458  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.559669  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
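The "scp memory" lines stream each embedded addon manifest over one of these SSH sessions directly into /etc/kubernetes/addons/ on the node; nothing is read from a file on the host. A rough shell equivalent, assuming a local storageclass.yaml and a sudo tee on the remote side, with the port, user, and key path shown in the sshutil lines above, would be:

    ssh -i /home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa \
        -p 33113 docker@127.0.0.1 \
        'sudo tee /etc/kubernetes/addons/storageclass.yaml >/dev/null' < storageclass.yaml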
	I0605 17:32:04.735254  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0605 17:32:04.744278  408313 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-735995" context rescaled to 1 replicas
	I0605 17:32:04.744364  408313 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
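The rescale logged above pins CoreDNS to a single replica, which is enough for this one-node cluster; a manual equivalent of the same operation would be:

    kubectl --context addons-735995 -n kube-system scale deployment coredns --replicas=1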
	I0605 17:32:04.748002  408313 out.go:177] * Verifying Kubernetes components...
	I0605 17:32:04.750502  408313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:32:04.831308  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0605 17:32:04.859820  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0605 17:32:04.880565  408313 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0605 17:32:04.880591  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0605 17:32:04.933172  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0605 17:32:04.948889  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0605 17:32:04.948957  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0605 17:32:04.971801  408313 addons.go:420] installing /etc/kubernetes/addons/registry-svc.yaml
	I0605 17:32:04.971826  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0605 17:32:04.978838  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 17:32:04.994189  408313 addons.go:420] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0605 17:32:04.994213  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0605 17:32:05.016343  408313 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0605 17:32:05.016372  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0605 17:32:05.066341  408313 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0605 17:32:05.066368  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0605 17:32:05.117912  408313 addons.go:420] installing /etc/kubernetes/addons/ig-role.yaml
	I0605 17:32:05.117939  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0605 17:32:05.135792  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0605 17:32:05.135819  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0605 17:32:05.218295  408313 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0605 17:32:05.218320  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0605 17:32:05.218552  408313 addons.go:420] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0605 17:32:05.218568  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0605 17:32:05.243902  408313 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0605 17:32:05.243940  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0605 17:32:05.329533  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0605 17:32:05.329560  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0605 17:32:05.354598  408313 addons.go:420] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0605 17:32:05.354623  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0605 17:32:05.402934  408313 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0605 17:32:05.402956  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0605 17:32:05.412311  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0605 17:32:05.439117  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0605 17:32:05.439182  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0605 17:32:05.466586  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0605 17:32:05.466647  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0605 17:32:05.563834  408313 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0605 17:32:05.563899  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0605 17:32:05.567079  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0605 17:32:05.572356  408313 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0605 17:32:05.572426  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0605 17:32:05.648593  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0605 17:32:05.648667  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0605 17:32:05.732460  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0605 17:32:05.750637  408313 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0605 17:32:05.750717  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0605 17:32:05.813022  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0605 17:32:05.813101  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0605 17:32:05.966917  408313 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.717436186s)
	I0605 17:32:05.966946  408313 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
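The pipeline that just completed rewrites the live CoreDNS ConfigMap so that host.minikube.internal resolves to the cluster network's gateway (192.168.49.1 in this run) and enables query logging. Stripped of the in-node sudo and kubeconfig wrapping, the same transformation is:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl replace -f -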
	I0605 17:32:05.973448  408313 addons.go:420] installing /etc/kubernetes/addons/ig-crd.yaml
	I0605 17:32:05.973472  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0605 17:32:06.034893  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0605 17:32:06.034917  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0605 17:32:06.143154  408313 addons.go:420] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0605 17:32:06.143183  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0605 17:32:06.206081  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0605 17:32:06.206107  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0605 17:32:06.303708  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0605 17:32:06.363513  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0605 17:32:06.363540  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0605 17:32:06.571218  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0605 17:32:06.571243  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0605 17:32:06.710020  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0605 17:32:07.670579  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.935288991s)
	I0605 17:32:07.670629  408313 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.92005997s)
	I0605 17:32:07.671460  408313 node_ready.go:35] waiting up to 6m0s for node "addons-735995" to be "Ready" ...
	I0605 17:32:08.260141  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.428796516s)
	I0605 17:32:09.573115  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.639863164s)
	I0605 17:32:09.573206  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.5943099s)
	I0605 17:32:09.573246  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.160911039s)
	I0605 17:32:09.573260  408313 addons.go:464] Verifying addon registry=true in "addons-735995"
	I0605 17:32:09.575282  408313 out.go:177] * Verifying registry addon...
	I0605 17:32:09.573388  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.713543443s)
	I0605 17:32:09.573522  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.00638233s)
	I0605 17:32:09.573612  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.841069715s)
	I0605 17:32:09.573685  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.269927502s)
	I0605 17:32:09.577390  408313 addons.go:464] Verifying addon metrics-server=true in "addons-735995"
	I0605 17:32:09.577410  408313 addons.go:464] Verifying addon ingress=true in "addons-735995"
	I0605 17:32:09.580183  408313 out.go:177] * Verifying ingress addon...
	I0605 17:32:09.578392  408313 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0605 17:32:09.578425  408313 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0605 17:32:09.583011  408313 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0605 17:32:09.580241  408313 retry.go:31] will retry after 285.689393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
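The failure recorded twice above is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass in the same apply batch as the CRDs that define that kind, and the API server rejects it because the new type is not yet established. The tooling handles this by retrying; the --force re-apply at 17:32:09.869 below goes through in under two seconds. A sketch of the conventional two-phase alternative, with the file and CRD names taken from this run, would be:

    # Phase 1: install only the CRD that defines the snapshot class kind.
    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    # Block until the API server reports the new kind as established.
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # Phase 2: the VolumeSnapshotClass object can now be applied safely.
    kubectl apply -f csi-hostpath-snapshotclass.yaml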
	I0605 17:32:09.590259  408313 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0605 17:32:09.590287  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:09.594777  408313 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0605 17:32:09.594800  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
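Each kapi.go:96 line from here on is one poll of the labelled pods, repeated until they leave Pending; the long runs of near-identical messages below are that loop ticking. A one-shot CLI equivalent for the two selectors just registered, with the timeout matching the test's own 6m0s bound, is:

    kubectl --context addons-735995 -n kube-system wait --timeout=6m \
      --for condition=Ready pod -l kubernetes.io/minikube-addons=registry
    kubectl --context addons-735995 -n ingress-nginx wait --timeout=6m \
      --for condition=Ready pod -l app.kubernetes.io/name=ingress-nginx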
	I0605 17:32:09.778540  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:09.869132  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0605 17:32:09.884212  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.174140985s)
	I0605 17:32:09.884246  408313 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-735995"
	I0605 17:32:09.888051  408313 out.go:177] * Verifying csi-hostpath-driver addon...
	I0605 17:32:09.890952  408313 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0605 17:32:09.915654  408313 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0605 17:32:09.915675  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:10.100007  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:10.108324  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:10.422568  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:10.594648  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:10.603624  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:10.929057  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:11.059311  408313 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0605 17:32:11.059402  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:11.094563  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:11.117108  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:11.117370  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:11.304808  408313 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0605 17:32:11.367083  408313 addons.go:228] Setting addon gcp-auth=true in "addons-735995"
	I0605 17:32:11.367130  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:11.367575  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:11.394752  408313 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0605 17:32:11.394810  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:11.422843  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:11.457028  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:11.638009  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:11.644199  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:11.725380  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.856186879s)
	I0605 17:32:11.727997  408313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0605 17:32:11.729921  408313 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0605 17:32:11.732180  408313 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0605 17:32:11.732230  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0605 17:32:11.791008  408313 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0605 17:32:11.791075  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0605 17:32:11.859298  408313 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0605 17:32:11.859376  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0605 17:32:11.922672  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:11.926154  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0605 17:32:12.095389  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:12.100423  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:12.272919  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:12.421414  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:12.596891  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:12.600315  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:12.942020  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:13.108729  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:13.135275  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:13.375991  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.449750988s)
	I0605 17:32:13.378267  408313 addons.go:464] Verifying addon gcp-auth=true in "addons-735995"
	I0605 17:32:13.381429  408313 out.go:177] * Verifying gcp-auth addon...
	I0605 17:32:13.384159  408313 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0605 17:32:13.403172  408313 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0605 17:32:13.403245  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:13.447281  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:13.595826  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:13.603990  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:13.907906  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:13.921391  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:14.095863  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:14.100834  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:14.412232  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:14.422319  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:14.596752  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:14.602015  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:14.773721  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:14.907911  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:14.921481  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:15.101374  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:15.105587  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:15.407217  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:15.424830  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:15.595515  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:15.599904  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:15.908424  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:15.931994  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:16.102682  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:16.103610  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:16.407762  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:16.421763  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:16.595667  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:16.600516  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:16.773901  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:16.908919  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:16.925694  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:17.095443  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:17.104332  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:17.408655  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:17.426973  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:17.599893  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:17.605408  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:17.908116  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:17.924415  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:18.095940  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:18.101563  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:18.407317  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:18.421796  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:18.595371  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:18.600905  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:18.908255  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:18.921539  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:19.095057  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:19.101530  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:19.273290  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:19.406959  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:19.421968  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:19.598218  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:19.601533  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:19.909392  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:19.923256  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:20.095066  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:20.099354  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:20.407770  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:20.424335  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:20.603184  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:20.615171  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:20.910864  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:20.922563  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:21.095497  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:21.102499  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:21.278213  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:21.408773  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:21.424425  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:21.600538  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:21.603808  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:21.907328  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:21.921252  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:22.096881  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:22.099886  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:22.408473  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:22.424966  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:22.595345  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:22.599893  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:22.907889  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:22.920338  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:23.095368  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:23.100216  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:23.407251  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:23.421464  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:23.595720  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:23.600436  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:23.772741  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:23.908858  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:23.928713  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:24.103214  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:24.106217  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:24.408451  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:24.425627  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:24.596696  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:24.601132  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:24.907462  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:24.921327  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:25.095841  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:25.099651  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:25.407420  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:25.420582  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:25.594479  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:25.598960  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:25.907692  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:25.920642  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:26.095317  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:26.099151  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:26.272929  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:26.407235  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:26.420902  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:26.597184  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:26.599663  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:26.907287  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:26.920151  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:27.094720  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:27.099530  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:27.406971  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:27.420730  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:27.594168  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:27.598674  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:27.906795  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:27.920194  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:28.094418  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:28.098904  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:28.407594  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:28.420448  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:28.595847  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:28.599784  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:28.772233  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:28.907271  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:28.920128  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:29.094812  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:29.098533  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:29.407218  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:29.420264  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:29.594976  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:29.599074  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:29.909507  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:29.922079  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:30.095585  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:30.100985  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:30.410817  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:30.420802  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:30.595451  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:30.598915  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:30.772764  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:30.907448  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:30.920720  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:31.095044  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:31.099752  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:31.406645  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:31.420627  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:31.595000  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:31.598865  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:31.907911  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:31.920582  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:32.094882  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:32.098504  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:32.406638  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:32.419720  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:32.594704  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:32.599451  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:32.773058  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:32.908471  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:32.920788  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:33.094570  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:33.099519  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:33.406964  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:33.420995  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:33.594649  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:33.599306  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:33.907745  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:33.920342  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:34.095066  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:34.099197  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:34.407312  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:34.420147  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:34.608632  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:34.612797  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:34.924142  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:34.954945  408313 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0605 17:32:34.955010  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:35.152004  408313 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0605 17:32:35.152120  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:35.153518  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:35.316700  408313 node_ready.go:49] node "addons-735995" has status "Ready":"True"
	I0605 17:32:35.316770  408313 node_ready.go:38] duration metric: took 27.645275528s waiting for node "addons-735995" to be "Ready" ...
	I0605 17:32:35.316794  408313 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
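node_ready gated on the node's Ready condition for 27.645s (the span since the wait began at 17:32:07.671), and pod_ready now applies the same wait to each system-critical pod. The equivalent checks by hand, with names from this run:

    kubectl --context addons-735995 wait --for condition=Ready --timeout=6m \
      node/addons-735995
    kubectl --context addons-735995 -n kube-system wait --for condition=Ready --timeout=6m \
      pod -l k8s-app=kube-dns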
	I0605 17:32:35.388074  408313 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-l5bkd" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:35.423231  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:35.433049  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:35.638224  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:35.638495  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:35.908367  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:35.923094  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:36.096145  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:36.100029  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:36.409771  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:36.422324  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:36.451045  408313 pod_ready.go:92] pod "coredns-5d78c9869d-l5bkd" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.451071  408313 pod_ready.go:81] duration metric: took 1.06293025s waiting for pod "coredns-5d78c9869d-l5bkd" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.451094  408313 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.457691  408313 pod_ready.go:92] pod "etcd-addons-735995" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.457737  408313 pod_ready.go:81] duration metric: took 6.633631ms waiting for pod "etcd-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.457753  408313 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.464459  408313 pod_ready.go:92] pod "kube-apiserver-addons-735995" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.464495  408313 pod_ready.go:81] duration metric: took 6.728433ms waiting for pod "kube-apiserver-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.464510  408313 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.471169  408313 pod_ready.go:92] pod "kube-controller-manager-addons-735995" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.471195  408313 pod_ready.go:81] duration metric: took 6.668561ms waiting for pod "kube-controller-manager-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.471210  408313 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cvrjb" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.479648  408313 pod_ready.go:92] pod "kube-proxy-cvrjb" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.479678  408313 pod_ready.go:81] duration metric: took 8.459096ms waiting for pod "kube-proxy-cvrjb" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.479690  408313 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.595635  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:36.601138  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:36.876196  408313 pod_ready.go:92] pod "kube-scheduler-addons-735995" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.876222  408313 pod_ready.go:81] duration metric: took 396.523416ms waiting for pod "kube-scheduler-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.876234  408313 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace to be "Ready" ...
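The pod_ready.go:102 lines that follow are this same check looping until metrics-server reports Ready. A hand-rolled equivalent (a sketch; the k8s-app=metrics-server selector is an assumption, and the pod could equally be addressed by its name from the log):

    # Poll until the metrics-server pod's Ready condition turns True
    until kubectl --context addons-735995 -n kube-system get pod \
        -l k8s-app=metrics-server \
        -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}' \
        | grep -qx True; do
      sleep 2
    done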
	I0605 17:32:36.907288  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:36.922809  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:37.095720  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:37.099903  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:37.407523  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:37.425372  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:37.599428  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:37.609182  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:37.908212  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:37.923165  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:38.094942  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:38.099224  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:38.407704  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:38.421619  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:38.596223  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:38.599413  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:38.907190  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:38.921925  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:39.095387  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:39.101035  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:39.288853  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:39.409467  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:39.427430  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:39.598540  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:39.600903  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:39.908502  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:39.926854  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:40.106845  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:40.111152  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:40.408316  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:40.423344  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:40.599261  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:40.606480  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:40.908616  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:40.936753  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:41.099495  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:41.103387  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:41.289763  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:41.407978  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:41.424349  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:41.595910  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:41.601500  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:41.907758  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:41.950984  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:42.107486  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:42.108165  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:42.407353  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:42.427903  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:42.605224  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:42.617242  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:42.908709  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:42.922913  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:43.106256  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:43.107762  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:43.408476  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:43.426102  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:43.595363  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:43.602746  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:43.781648  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:43.907757  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:43.921900  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:44.096362  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:44.101970  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:44.408523  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:44.422468  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:44.607553  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:44.608949  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:44.907432  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:44.922316  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:45.100016  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:45.102245  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:45.409035  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:45.422191  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:45.596302  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:45.599725  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:45.782197  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:45.907199  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:45.922403  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:46.097423  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:46.102621  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:46.408150  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:46.422399  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:46.595955  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:46.600668  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:46.907148  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:46.921632  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:47.095833  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:47.099385  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:47.407320  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:47.421675  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:47.595866  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:47.600054  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:47.907704  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:47.940955  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:48.095368  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:48.099744  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:48.283880  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:48.408737  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:48.423501  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:48.595106  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:48.599026  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:48.907910  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:48.922257  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:49.096232  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:49.100386  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:49.407452  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:49.425749  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:49.599162  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:49.605140  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:49.919827  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:49.927839  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:50.096296  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:50.102646  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:50.408891  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:50.421875  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:50.595717  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:50.598968  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:50.780736  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:50.907224  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:50.922179  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:51.097213  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:51.102504  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:51.407167  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:51.423339  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:51.598986  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:51.602205  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:51.912814  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:51.925975  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:52.096320  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:52.101803  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:52.435689  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:52.439340  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:52.597868  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:52.606277  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:52.789157  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:52.906924  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:52.921738  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:53.096025  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:53.100717  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:53.406967  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:53.422385  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:53.595572  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:53.600540  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:53.907190  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:53.922062  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:54.116952  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:54.117863  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:54.408256  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:54.430969  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:54.598198  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:54.607487  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:54.910192  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:54.922306  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:55.097039  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:55.101912  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:55.281009  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:55.408266  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:55.423306  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:55.597112  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:55.602571  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:55.907695  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:55.921909  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:56.096533  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:56.103639  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:56.415548  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:56.424490  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:56.608574  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:56.612889  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:56.908097  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:56.924258  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:57.096770  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:57.102078  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:57.283650  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:57.407574  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:57.422879  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:57.597895  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:57.602032  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:57.931613  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:57.954925  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:58.097718  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:58.105745  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:58.410496  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:58.430333  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:58.599381  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:58.619806  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:58.907224  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:58.927558  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:59.103084  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:59.113194  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:59.408138  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:59.423867  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:59.616486  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:59.616952  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:59.784359  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:59.910185  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:59.923690  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:00.133311  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:00.158375  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:00.413822  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:00.423220  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:00.595309  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:00.599633  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:00.907040  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:00.921862  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:01.095728  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:01.100562  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:01.409214  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:01.422392  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:01.596026  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:01.599654  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:01.907555  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:01.928422  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:02.100801  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:02.101692  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:02.291392  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:33:02.407374  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:02.421373  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:02.596015  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:02.599084  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:02.911740  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:02.923961  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:03.096184  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:03.101025  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:03.407982  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:03.424471  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:03.608441  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:03.608988  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:03.908070  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:03.921213  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:04.114021  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:04.114947  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:04.407296  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:04.422128  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:04.597386  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:04.600602  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:04.781420  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:33:04.909372  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:04.924286  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:05.095576  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:05.099825  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:05.407233  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:05.422900  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:05.696713  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:05.703963  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:05.782951  408313 pod_ready.go:92] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"True"
	I0605 17:33:05.783040  408313 pod_ready.go:81] duration metric: took 28.906795415s waiting for pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace to be "Ready" ...
	I0605 17:33:05.783110  408313 pod_ready.go:38] duration metric: took 30.466261695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0605 17:33:05.783182  408313 api_server.go:52] waiting for apiserver process to appear ...
	I0605 17:33:05.783252  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 17:33:05.783388  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 17:33:05.857947  408313 cri.go:88] found id: "fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:05.857971  408313 cri.go:88] found id: ""
	I0605 17:33:05.857979  408313 logs.go:284] 1 containers: [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4]
	I0605 17:33:05.858034  408313 ssh_runner.go:195] Run: which crictl
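Each cri.go:53 block above pairs a container query with a `which crictl` lookup. In the crictl invocation, `-a` includes exited containers, `--quiet` prints only container IDs, and `--name` filters by regular expression, so exactly one ID per control-plane component is the expected result. Runnable directly on the node:

    # IDs of all kube-apiserver containers, running or exited
    sudo crictl ps -a --quiet --name=kube-apiserver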
	I0605 17:33:05.864216  408313 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 17:33:05.864289  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 17:33:05.911877  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:05.924110  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:05.959001  408313 cri.go:88] found id: "6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:05.959023  408313 cri.go:88] found id: ""
	I0605 17:33:05.959030  408313 logs.go:284] 1 containers: [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576]
	I0605 17:33:05.959089  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:05.967513  408313 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 17:33:05.967581  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 17:33:06.064292  408313 cri.go:88] found id: "508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:06.064313  408313 cri.go:88] found id: ""
	I0605 17:33:06.064321  408313 logs.go:284] 1 containers: [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e]
	I0605 17:33:06.064384  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:06.070139  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 17:33:06.070223  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 17:33:06.096580  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:06.101763  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:06.151182  408313 cri.go:88] found id: "afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:06.151208  408313 cri.go:88] found id: ""
	I0605 17:33:06.151216  408313 logs.go:284] 1 containers: [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719]
	I0605 17:33:06.151274  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:06.164593  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 17:33:06.164667  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 17:33:06.255481  408313 cri.go:88] found id: "5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:06.255506  408313 cri.go:88] found id: ""
	I0605 17:33:06.255515  408313 logs.go:284] 1 containers: [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098]
	I0605 17:33:06.255573  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:06.261760  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 17:33:06.261842  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 17:33:06.325892  408313 cri.go:88] found id: "07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:06.325916  408313 cri.go:88] found id: ""
	I0605 17:33:06.325924  408313 logs.go:284] 1 containers: [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a]
	I0605 17:33:06.325986  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:06.331847  408313 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 17:33:06.331932  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 17:33:06.391508  408313 cri.go:88] found id: "9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:06.391532  408313 cri.go:88] found id: ""
	I0605 17:33:06.391540  408313 logs.go:284] 1 containers: [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c]
	I0605 17:33:06.391608  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:06.397823  408313 logs.go:123] Gathering logs for dmesg ...
	I0605 17:33:06.397850  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 17:33:06.408217  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:06.422742  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:06.434206  408313 logs.go:123] Gathering logs for kube-scheduler [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719] ...
	I0605 17:33:06.434235  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:06.504496  408313 logs.go:123] Gathering logs for kube-proxy [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098] ...
	I0605 17:33:06.504535  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:06.611938  408313 logs.go:123] Gathering logs for kube-controller-manager [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a] ...
	I0605 17:33:06.611964  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:06.622262  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:06.623613  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:06.726420  408313 logs.go:123] Gathering logs for kubelet ...
	I0605 17:33:06.726498  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 17:33:06.839638  408313 logs.go:123] Gathering logs for describe nodes ...
	I0605 17:33:06.839716  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0605 17:33:06.921432  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:06.925458  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:07.118944  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:07.120401  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:07.131091  408313 logs.go:123] Gathering logs for kube-apiserver [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4] ...
	I0605 17:33:07.131165  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:07.193090  408313 logs.go:123] Gathering logs for etcd [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576] ...
	I0605 17:33:07.193124  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:07.253207  408313 logs.go:123] Gathering logs for coredns [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e] ...
	I0605 17:33:07.253242  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:07.301768  408313 logs.go:123] Gathering logs for kindnet [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c] ...
	I0605 17:33:07.301799  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:07.349948  408313 logs.go:123] Gathering logs for CRI-O ...
	I0605 17:33:07.349975  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0605 17:33:07.407453  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:07.437783  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:07.442783  408313 logs.go:123] Gathering logs for container status ...
	I0605 17:33:07.442812  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
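The gathering pass above is a fixed set of collectors, each capped at 400 lines. They can be re-run by hand on the node when reproducing the failure, substituting container IDs from the cri.go output above:

    # kubelet and CRI-O service logs
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # kernel warnings and errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # per-container logs, e.g. the kube-apiserver container found earlier
    sudo crictl logs --tail 400 fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4
    # overall container status, with a docker fallback
    sudo crictl ps -a || sudo docker ps -a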
	I0605 17:33:07.596200  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:07.599692  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:07.907616  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:07.921733  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:08.095161  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:08.099259  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:08.407427  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:08.424727  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:08.596190  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:08.600808  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:08.908874  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:08.926153  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:09.102385  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:09.108464  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:09.415140  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:09.424777  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:09.598054  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:09.608497  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:09.911617  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:09.923503  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:10.005907  408313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 17:33:10.030437  408313 api_server.go:72] duration metric: took 1m5.286025078s to wait for apiserver process to appear ...
	I0605 17:33:10.030513  408313 api_server.go:88] waiting for apiserver healthz status ...
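api_server.go first confirms a kube-apiserver process exists (the pgrep above), then switches to polling the healthz endpoint. Both checks can be made by hand (a sketch; `kubectl get --raw` goes through the kubeconfig context rather than hitting the apiserver port directly):

    # Is an apiserver process running for this profile?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Does it answer healthz? Expect the literal response "ok"
    kubectl --context addons-735995 get --raw /healthz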
	I0605 17:33:10.030561  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 17:33:10.030649  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 17:33:10.126626  408313 cri.go:88] found id: "fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:10.126732  408313 cri.go:88] found id: ""
	I0605 17:33:10.126754  408313 logs.go:284] 1 containers: [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4]
	I0605 17:33:10.126858  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.130113  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:10.133464  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:10.144318  408313 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 17:33:10.144441  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 17:33:10.240553  408313 cri.go:88] found id: "6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:10.240621  408313 cri.go:88] found id: ""
	I0605 17:33:10.240644  408313 logs.go:284] 1 containers: [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576]
	I0605 17:33:10.240741  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.248610  408313 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 17:33:10.248732  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 17:33:10.320727  408313 cri.go:88] found id: "508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:10.320795  408313 cri.go:88] found id: ""
	I0605 17:33:10.320818  408313 logs.go:284] 1 containers: [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e]
	I0605 17:33:10.320914  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.328965  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 17:33:10.329075  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 17:33:10.410567  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:10.425252  408313 cri.go:88] found id: "afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:10.425277  408313 cri.go:88] found id: ""
	I0605 17:33:10.425286  408313 logs.go:284] 1 containers: [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719]
	I0605 17:33:10.425340  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.431131  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:10.437784  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 17:33:10.437880  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 17:33:10.490921  408313 cri.go:88] found id: "5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:10.490947  408313 cri.go:88] found id: ""
	I0605 17:33:10.490956  408313 logs.go:284] 1 containers: [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098]
	I0605 17:33:10.491009  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.496345  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 17:33:10.496421  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 17:33:10.575552  408313 cri.go:88] found id: "07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:10.575624  408313 cri.go:88] found id: ""
	I0605 17:33:10.575647  408313 logs.go:284] 1 containers: [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a]
	I0605 17:33:10.575764  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.586747  408313 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 17:33:10.586900  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 17:33:10.597271  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:10.608133  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:10.668737  408313 cri.go:88] found id: "9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:10.668812  408313 cri.go:88] found id: ""
	I0605 17:33:10.668842  408313 logs.go:284] 1 containers: [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c]
	I0605 17:33:10.668953  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.675739  408313 logs.go:123] Gathering logs for kubelet ...
	I0605 17:33:10.675822  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 17:33:10.791701  408313 logs.go:123] Gathering logs for kube-apiserver [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4] ...
	I0605 17:33:10.791776  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:10.907600  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:10.924315  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:10.962455  408313 logs.go:123] Gathering logs for kube-scheduler [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719] ...
	I0605 17:33:10.962509  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:11.078060  408313 logs.go:123] Gathering logs for kube-proxy [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098] ...
	I0605 17:33:11.078165  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:11.103752  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:11.106961  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:11.199528  408313 logs.go:123] Gathering logs for kindnet [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c] ...
	I0605 17:33:11.199612  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:11.286935  408313 logs.go:123] Gathering logs for container status ...
	I0605 17:33:11.287009  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0605 17:33:11.407829  408313 logs.go:123] Gathering logs for dmesg ...
	I0605 17:33:11.407860  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 17:33:11.413183  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:11.422671  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:11.459401  408313 logs.go:123] Gathering logs for describe nodes ...
	I0605 17:33:11.459433  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0605 17:33:11.622016  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:11.623440  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:11.807758  408313 logs.go:123] Gathering logs for etcd [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576] ...
	I0605 17:33:11.807795  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:11.912632  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:11.916599  408313 logs.go:123] Gathering logs for coredns [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e] ...
	I0605 17:33:11.916657  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:11.929985  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:12.005580  408313 logs.go:123] Gathering logs for kube-controller-manager [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a] ...
	I0605 17:33:12.005681  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:12.098310  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:12.105012  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:12.162827  408313 logs.go:123] Gathering logs for CRI-O ...
	I0605 17:33:12.162869  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0605 17:33:12.411736  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:12.441851  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:12.595839  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:12.600103  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:12.908823  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:12.930258  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:13.097302  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:13.102543  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:13.411491  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:13.421971  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:13.596292  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:13.600850  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:13.912095  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:13.921157  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:14.098446  408313 kapi.go:107] duration metric: took 1m4.520051227s to wait for kubernetes.io/minikube-addons=registry ...
	I0605 17:33:14.102935  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:14.407251  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:14.421510  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:14.599765  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:14.778009  408313 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0605 17:33:14.787208  408313 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0605 17:33:14.788640  408313 api_server.go:141] control plane version: v1.27.2
	I0605 17:33:14.788666  408313 api_server.go:131] duration metric: took 4.758132634s to wait for apiserver health ...
	I0605 17:33:14.788675  408313 system_pods.go:43] waiting for kube-system pods to appear ...
	I0605 17:33:14.788697  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 17:33:14.788764  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 17:33:14.836296  408313 cri.go:88] found id: "fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:14.836320  408313 cri.go:88] found id: ""
	I0605 17:33:14.836328  408313 logs.go:284] 1 containers: [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4]
	I0605 17:33:14.836383  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:14.843068  408313 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 17:33:14.843146  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 17:33:14.910072  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:14.921974  408313 cri.go:88] found id: "6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:14.922040  408313 cri.go:88] found id: ""
	I0605 17:33:14.922062  408313 logs.go:284] 1 containers: [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576]
	I0605 17:33:14.922155  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:14.924466  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:14.933871  408313 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 17:33:14.933991  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 17:33:14.998285  408313 cri.go:88] found id: "508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:14.998355  408313 cri.go:88] found id: ""
	I0605 17:33:14.998378  408313 logs.go:284] 1 containers: [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e]
	I0605 17:33:14.998471  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:15.010750  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 17:33:15.010915  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 17:33:15.073940  408313 cri.go:88] found id: "afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:15.073967  408313 cri.go:88] found id: ""
	I0605 17:33:15.073976  408313 logs.go:284] 1 containers: [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719]
	I0605 17:33:15.074051  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:15.081778  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 17:33:15.081860  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 17:33:15.104736  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:15.137616  408313 cri.go:88] found id: "5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:15.137693  408313 cri.go:88] found id: ""
	I0605 17:33:15.137715  408313 logs.go:284] 1 containers: [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098]
	I0605 17:33:15.137801  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:15.143124  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 17:33:15.143246  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 17:33:15.195016  408313 cri.go:88] found id: "07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:15.195087  408313 cri.go:88] found id: ""
	I0605 17:33:15.195108  408313 logs.go:284] 1 containers: [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a]
	I0605 17:33:15.195177  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:15.200202  408313 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 17:33:15.200311  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 17:33:15.250276  408313 cri.go:88] found id: "9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:15.250350  408313 cri.go:88] found id: ""
	I0605 17:33:15.250372  408313 logs.go:284] 1 containers: [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c]
	I0605 17:33:15.250505  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:15.255205  408313 logs.go:123] Gathering logs for etcd [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576] ...
	I0605 17:33:15.255267  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:15.308041  408313 logs.go:123] Gathering logs for coredns [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e] ...
	I0605 17:33:15.308073  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:15.353184  408313 logs.go:123] Gathering logs for kube-controller-manager [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a] ...
	I0605 17:33:15.353214  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:15.414643  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:15.421179  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:15.424187  408313 logs.go:123] Gathering logs for dmesg ...
	I0605 17:33:15.424226  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 17:33:15.454948  408313 logs.go:123] Gathering logs for describe nodes ...
	I0605 17:33:15.455006  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0605 17:33:15.605124  408313 logs.go:123] Gathering logs for kube-apiserver [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4] ...
	I0605 17:33:15.605158  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:15.610491  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:15.698728  408313 logs.go:123] Gathering logs for kube-scheduler [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719] ...
	I0605 17:33:15.698767  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:15.773743  408313 logs.go:123] Gathering logs for kube-proxy [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098] ...
	I0605 17:33:15.773773  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:15.819420  408313 logs.go:123] Gathering logs for kindnet [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c] ...
	I0605 17:33:15.819479  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:15.888621  408313 logs.go:123] Gathering logs for CRI-O ...
	I0605 17:33:15.888652  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0605 17:33:15.908064  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:15.926986  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:16.002389  408313 logs.go:123] Gathering logs for container status ...
	I0605 17:33:16.002438  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0605 17:33:16.101689  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:16.118266  408313 logs.go:123] Gathering logs for kubelet ...
	I0605 17:33:16.118340  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 17:33:16.409430  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:16.423394  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:16.600614  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:16.907753  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:16.921436  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:17.100848  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:17.407822  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:17.421926  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:17.604654  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:17.909577  408313 kapi.go:107] duration metric: took 1m4.525409852s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0605 17:33:17.913053  408313 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-735995 cluster.
	I0605 17:33:17.915736  408313 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0605 17:33:17.918049  408313 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0605 17:33:17.933820  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:18.099815  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:18.447066  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:18.601533  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:18.789526  408313 system_pods.go:59] 17 kube-system pods found
	I0605 17:33:18.789565  408313 system_pods.go:61] "coredns-5d78c9869d-l5bkd" [4f797771-1160-4aee-90d6-6318e79fb0f1] Running
	I0605 17:33:18.789573  408313 system_pods.go:61] "csi-hostpath-attacher-0" [865791b9-c9c5-4006-a914-13a73b32e398] Running
	I0605 17:33:18.789578  408313 system_pods.go:61] "csi-hostpath-resizer-0" [83dcd2f8-2a1e-4450-8d18-5e5a86bda005] Running
	I0605 17:33:18.789589  408313 system_pods.go:61] "csi-hostpathplugin-jsp8k" [17cdb2d4-6cb2-4b5d-b466-8fac66c26119] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0605 17:33:18.789596  408313 system_pods.go:61] "etcd-addons-735995" [9cd50488-e14a-41b5-ab75-bd0e60fb5629] Running
	I0605 17:33:18.789606  408313 system_pods.go:61] "kindnet-n94t6" [636d16c5-d20f-4ce5-9bcd-6785b44e7099] Running
	I0605 17:33:18.789612  408313 system_pods.go:61] "kube-apiserver-addons-735995" [8607dcea-cddf-40ed-9bd2-0f3c8cfb5a93] Running
	I0605 17:33:18.789622  408313 system_pods.go:61] "kube-controller-manager-addons-735995" [7b02cda4-0e3f-4011-b5c7-e2992fea324c] Running
	I0605 17:33:18.789631  408313 system_pods.go:61] "kube-ingress-dns-minikube" [caadaba2-93ce-42a1-8339-fc8d5e28c44a] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0605 17:33:18.789641  408313 system_pods.go:61] "kube-proxy-cvrjb" [6339c547-f0d2-473e-8384-9a2a6edb94c1] Running
	I0605 17:33:18.789646  408313 system_pods.go:61] "kube-scheduler-addons-735995" [a17a0f19-922c-4753-b9e3-ace693ab8799] Running
	I0605 17:33:18.789652  408313 system_pods.go:61] "metrics-server-844d8db974-66p4n" [da2b3efb-e47f-430a-b8c7-e9c926140c32] Running
	I0605 17:33:18.789662  408313 system_pods.go:61] "registry-d94xj" [3b4e0792-a45f-41f1-911a-36c1609f1e26] Running
	I0605 17:33:18.789669  408313 system_pods.go:61] "registry-proxy-6c5b7" [542106f4-ef94-45fe-8183-768a7d7b500f] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0605 17:33:18.789680  408313 system_pods.go:61] "snapshot-controller-75bbb956b9-4wct2" [bebcc4f3-f9bf-4eef-ab2b-834954867d13] Running
	I0605 17:33:18.789820  408313 system_pods.go:61] "snapshot-controller-75bbb956b9-x66wp" [8f84c95d-a6f1-4a7e-a332-fd7fb635f8f0] Running
	I0605 17:33:18.789832  408313 system_pods.go:61] "storage-provisioner" [ed27cc6b-2dcd-4877-9e7a-e9064bd85070] Running
	I0605 17:33:18.789838  408313 system_pods.go:74] duration metric: took 4.001157632s to wait for pod list to return data ...
	I0605 17:33:18.789847  408313 default_sa.go:34] waiting for default service account to be created ...
	I0605 17:33:18.794363  408313 default_sa.go:45] found service account: "default"
	I0605 17:33:18.794390  408313 default_sa.go:55] duration metric: took 4.533756ms for default service account to be created ...
	I0605 17:33:18.794414  408313 system_pods.go:116] waiting for k8s-apps to be running ...
	I0605 17:33:18.807542  408313 system_pods.go:86] 17 kube-system pods found
	I0605 17:33:18.807629  408313 system_pods.go:89] "coredns-5d78c9869d-l5bkd" [4f797771-1160-4aee-90d6-6318e79fb0f1] Running
	I0605 17:33:18.807651  408313 system_pods.go:89] "csi-hostpath-attacher-0" [865791b9-c9c5-4006-a914-13a73b32e398] Running
	I0605 17:33:18.807673  408313 system_pods.go:89] "csi-hostpath-resizer-0" [83dcd2f8-2a1e-4450-8d18-5e5a86bda005] Running
	I0605 17:33:18.807709  408313 system_pods.go:89] "csi-hostpathplugin-jsp8k" [17cdb2d4-6cb2-4b5d-b466-8fac66c26119] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0605 17:33:18.807739  408313 system_pods.go:89] "etcd-addons-735995" [9cd50488-e14a-41b5-ab75-bd0e60fb5629] Running
	I0605 17:33:18.807762  408313 system_pods.go:89] "kindnet-n94t6" [636d16c5-d20f-4ce5-9bcd-6785b44e7099] Running
	I0605 17:33:18.807784  408313 system_pods.go:89] "kube-apiserver-addons-735995" [8607dcea-cddf-40ed-9bd2-0f3c8cfb5a93] Running
	I0605 17:33:18.807818  408313 system_pods.go:89] "kube-controller-manager-addons-735995" [7b02cda4-0e3f-4011-b5c7-e2992fea324c] Running
	I0605 17:33:18.807846  408313 system_pods.go:89] "kube-ingress-dns-minikube" [caadaba2-93ce-42a1-8339-fc8d5e28c44a] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0605 17:33:18.807866  408313 system_pods.go:89] "kube-proxy-cvrjb" [6339c547-f0d2-473e-8384-9a2a6edb94c1] Running
	I0605 17:33:18.807886  408313 system_pods.go:89] "kube-scheduler-addons-735995" [a17a0f19-922c-4753-b9e3-ace693ab8799] Running
	I0605 17:33:18.807926  408313 system_pods.go:89] "metrics-server-844d8db974-66p4n" [da2b3efb-e47f-430a-b8c7-e9c926140c32] Running
	I0605 17:33:18.807951  408313 system_pods.go:89] "registry-d94xj" [3b4e0792-a45f-41f1-911a-36c1609f1e26] Running
	I0605 17:33:18.807973  408313 system_pods.go:89] "registry-proxy-6c5b7" [542106f4-ef94-45fe-8183-768a7d7b500f] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0605 17:33:18.807993  408313 system_pods.go:89] "snapshot-controller-75bbb956b9-4wct2" [bebcc4f3-f9bf-4eef-ab2b-834954867d13] Running
	I0605 17:33:18.808026  408313 system_pods.go:89] "snapshot-controller-75bbb956b9-x66wp" [8f84c95d-a6f1-4a7e-a332-fd7fb635f8f0] Running
	I0605 17:33:18.808048  408313 system_pods.go:89] "storage-provisioner" [ed27cc6b-2dcd-4877-9e7a-e9064bd85070] Running
	I0605 17:33:18.808069  408313 system_pods.go:126] duration metric: took 13.646331ms to wait for k8s-apps to be running ...
	I0605 17:33:18.808089  408313 system_svc.go:44] waiting for kubelet service to be running ....
	I0605 17:33:18.808174  408313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:33:18.832843  408313 system_svc.go:56] duration metric: took 24.745066ms WaitForService to wait for kubelet.
	I0605 17:33:18.832911  408313 kubeadm.go:581] duration metric: took 1m14.088505235s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0605 17:33:18.832948  408313 node_conditions.go:102] verifying NodePressure condition ...
	I0605 17:33:18.838814  408313 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0605 17:33:18.838892  408313 node_conditions.go:123] node cpu capacity is 2
	I0605 17:33:18.838921  408313 node_conditions.go:105] duration metric: took 5.948497ms to run NodePressure ...
	I0605 17:33:18.838945  408313 start.go:228] waiting for startup goroutines ...
	I0605 17:33:18.921833  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:19.103343  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:19.421651  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:19.600490  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:19.922768  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:20.100290  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:20.423871  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:20.609193  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:20.923164  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:21.100546  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:21.422245  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:21.600465  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:21.921451  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:22.100501  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:22.421573  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:22.599691  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:22.924589  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:23.100855  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:23.422513  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:23.599469  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:23.921708  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:24.100559  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:24.427423  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:24.600193  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:24.922436  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:25.102872  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:25.422552  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:25.602566  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:25.924218  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:26.100282  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:26.421456  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:26.601083  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:26.923326  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:27.100639  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:27.427489  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:27.601444  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:27.924112  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:28.100643  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:28.429669  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:28.600286  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:28.921804  408313 kapi.go:107] duration metric: took 1m19.030849226s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0605 17:33:29.099808  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:29.600375  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:30.102074  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:30.600327  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:31.100469  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:31.599173  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:32.100128  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:32.600687  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:33.099687  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:33.599325  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:34.100210  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:34.599408  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:35.099389  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:35.600069  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:36.100039  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:36.602282  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:37.100337  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:37.600744  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:38.099395  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:38.599713  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:39.100884  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:39.600235  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:40.100672  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:40.600533  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:41.099494  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:41.599991  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:42.101750  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:42.599254  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:43.099858  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:43.600226  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:44.102189  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:44.600025  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:45.105291  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:45.599213  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:46.101417  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:46.600521  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:47.115604  408313 kapi.go:107] duration metric: took 1m37.532593975s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0605 17:33:47.118885  408313 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, inspektor-gadget, metrics-server, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0605 17:33:47.122146  408313 addons.go:499] enable addons completed in 1m43.212870954s: enabled=[cloud-spanner ingress-dns default-storageclass storage-provisioner inspektor-gadget metrics-server volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0605 17:33:47.122260  408313 start.go:233] waiting for cluster config update ...
	I0605 17:33:47.122324  408313 start.go:242] writing updated cluster config ...
	I0605 17:33:47.122737  408313 ssh_runner.go:195] Run: rm -f paused
	I0605 17:33:47.213588  408313 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0605 17:33:47.216456  408313 out.go:177] * Done! kubectl is now configured to use "addons-735995" cluster and "default" namespace by default
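	
	The repeated "kapi.go:96" lines above record a simple poll: minikube lists pods matching each addon's label selector until all of them report Running, then prints the "duration metric: took ..." line. Below is a minimal, hypothetical client-go sketch of that pattern, assuming a standard kubeconfig; it is not minikube's actual implementation, and the selector and timeout are taken from this run purely for illustration.
	
	// Hypothetical sketch of a label-selector wait loop (not minikube's code).
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForPods polls until every pod matching selector in ns reports Running,
	// logging non-Running states much like the "current state: Pending" lines above.
	func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient or not yet scheduled; keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != "Running" {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		start := time.Now()
		if err := waitForPods(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Printf("duration metric: took %s to wait for the selector\n", time.Since(start))
	}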
	
	* 
	* ==> CRI-O <==
	* Jun 05 17:36:33 addons-735995 crio[892]: time="2023-06-05 17:36:33.499784420Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-p8crq/hello-world-app" id=113f45a2-0348-483c-ab1f-796b0713f141 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 05 17:36:33 addons-735995 crio[892]: time="2023-06-05 17:36:33.499881971Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 05 17:36:33 addons-735995 crio[892]: time="2023-06-05 17:36:33.608743462Z" level=info msg="Created container 731580839d70d7f54b776dbae38fb88ef3d1f3a00730c1b5eba7c8cf91013673: default/hello-world-app-65bdb79f98-p8crq/hello-world-app" id=113f45a2-0348-483c-ab1f-796b0713f141 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 05 17:36:33 addons-735995 crio[892]: time="2023-06-05 17:36:33.609874385Z" level=info msg="Starting container: 731580839d70d7f54b776dbae38fb88ef3d1f3a00730c1b5eba7c8cf91013673" id=74369aea-1024-47a3-9d53-03d76ae03885 name=/runtime.v1.RuntimeService/StartContainer
	Jun 05 17:36:33 addons-735995 conmon[6945]: conmon 731580839d70d7f54b77 <ninfo>: container 6956 exited with status 1
	Jun 05 17:36:33 addons-735995 crio[892]: time="2023-06-05 17:36:33.625080482Z" level=info msg="Started container" PID=6956 containerID=731580839d70d7f54b776dbae38fb88ef3d1f3a00730c1b5eba7c8cf91013673 description=default/hello-world-app-65bdb79f98-p8crq/hello-world-app id=74369aea-1024-47a3-9d53-03d76ae03885 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fb983610bdce8a2ab35f2502079bfa3d4ead532c943d73cc7bc34fc3b4e587ba
	Jun 05 17:36:34 addons-735995 crio[892]: time="2023-06-05 17:36:34.501005924Z" level=info msg="Removing container: 8578441bd64990703214c4d11b4070f1f2e8017eabb3b34c72590f81bbb088a0" id=c6cea2e4-0004-47cc-964f-766d50444365 name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 05 17:36:34 addons-735995 crio[892]: time="2023-06-05 17:36:34.545402158Z" level=info msg="Removed container 8578441bd64990703214c4d11b4070f1f2e8017eabb3b34c72590f81bbb088a0: default/hello-world-app-65bdb79f98-p8crq/hello-world-app" id=c6cea2e4-0004-47cc-964f-766d50444365 name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.562075071Z" level=info msg="Stopping container: df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08 (timeout: 30s)" id=bae6c021-e392-439b-8ce2-52128f69f66d name=/runtime.v1.RuntimeService/StopContainer
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.592386553Z" level=info msg="Stopping pod sandbox: 4463b0e15f3a4ef35ff4e0d5c2f0ab27bf286be45fba5b4c02c674c9dee66287" id=edb351f5-2cc8-4c12-8d67-f1d2d08aba4d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.608194307Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-OHM2QGLFPONIB2J7 - [0:0]\n:KUBE-HP-JZV4G2W5FN34HSGA - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-OSGAABXVQOPTMUKF - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-7b4698b8c7-t9x2c_ingress-nginx_95027045-abda-4215-82f0-6f5046ecf11e_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-JZV4G2W5FN34HSGA\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-7b4698b8c7-t9x2c_ingress-nginx_95027045-abda-4215-82f0-6f5046ecf11e_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-OHM2QGLFPONIB2J7\n-A KUBE-HP-JZV4G2W5FN34HSGA -s 10.244.0.17/32 -m comment --comment \"k8s_ingress-nginx-controller-7b4698b8c7-t9x2c_ingress-nginx_95027045-abda-4215-82f0-6f5046ecf11e_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-JZV4G2W5FN34HSGA -p tcp -m comment --comment \"k8s_ingress-nginx-controller-7b4698b8c7-t9x2c_ingress-nginx_95027045-abda-421
5-82f0-6f5046ecf11e_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.17:443\n-A KUBE-HP-OHM2QGLFPONIB2J7 -s 10.244.0.17/32 -m comment --comment \"k8s_ingress-nginx-controller-7b4698b8c7-t9x2c_ingress-nginx_95027045-abda-4215-82f0-6f5046ecf11e_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-OHM2QGLFPONIB2J7 -p tcp -m comment --comment \"k8s_ingress-nginx-controller-7b4698b8c7-t9x2c_ingress-nginx_95027045-abda-4215-82f0-6f5046ecf11e_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.17:80\n-X KUBE-HP-OSGAABXVQOPTMUKF\nCOMMIT\n"
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.620152891Z" level=info msg="Closing host port tcp:5000"
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.624578916Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.624796976Z" level=info msg="Got pod network &{Name:registry-proxy-6c5b7 Namespace:kube-system ID:4463b0e15f3a4ef35ff4e0d5c2f0ab27bf286be45fba5b4c02c674c9dee66287 UID:542106f4-ef94-45fe-8183-768a7d7b500f NetNS:/var/run/netns/799133c0-ed3f-42e0-880b-df1875f65f9f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.624952972Z" level=info msg="Deleting pod kube-system_registry-proxy-6c5b7 from CNI network \"kindnet\" (type=ptp)"
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.652436700Z" level=info msg="Stopped pod sandbox: 4463b0e15f3a4ef35ff4e0d5c2f0ab27bf286be45fba5b4c02c674c9dee66287" id=edb351f5-2cc8-4c12-8d67-f1d2d08aba4d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.759662906Z" level=info msg="Stopped container df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08: kube-system/registry-d94xj/registry" id=bae6c021-e392-439b-8ce2-52128f69f66d name=/runtime.v1.RuntimeService/StopContainer
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.760677899Z" level=info msg="Stopping pod sandbox: 0fe1ad5c24c16cd4f5e14b1c54641776655d180fa037f29a51945d46b167d54a" id=5e15cdab-579d-43f1-8929-148e307e3713 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.760938412Z" level=info msg="Got pod network &{Name:registry-d94xj Namespace:kube-system ID:0fe1ad5c24c16cd4f5e14b1c54641776655d180fa037f29a51945d46b167d54a UID:3b4e0792-a45f-41f1-911a-36c1609f1e26 NetNS:/var/run/netns/9be230f8-3202-4418-9e37-f4d8e78d605e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.761072804Z" level=info msg="Deleting pod kube-system_registry-d94xj from CNI network \"kindnet\" (type=ptp)"
	Jun 05 17:36:44 addons-735995 crio[892]: time="2023-06-05 17:36:44.804842613Z" level=info msg="Stopped pod sandbox: 0fe1ad5c24c16cd4f5e14b1c54641776655d180fa037f29a51945d46b167d54a" id=5e15cdab-579d-43f1-8929-148e307e3713 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:45 addons-735995 crio[892]: time="2023-06-05 17:36:45.529260276Z" level=info msg="Removing container: df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08" id=8ba75d58-861a-4100-b4b0-bb97a5f05b18 name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 05 17:36:45 addons-735995 crio[892]: time="2023-06-05 17:36:45.564179453Z" level=info msg="Removed container df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08: kube-system/registry-d94xj/registry" id=8ba75d58-861a-4100-b4b0-bb97a5f05b18 name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 05 17:36:45 addons-735995 crio[892]: time="2023-06-05 17:36:45.566031583Z" level=info msg="Removing container: 10175b9c665c654d4d7e7f6171f554904cbdb2bc2d9f2d1af8f7827c2401c2f2" id=75295602-b616-4b27-9d41-21bc1d4647c4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 05 17:36:45 addons-735995 crio[892]: time="2023-06-05 17:36:45.601472985Z" level=info msg="Removed container 10175b9c665c654d4d7e7f6171f554904cbdb2bc2d9f2d1af8f7827c2401c2f2: kube-system/registry-proxy-6c5b7/registry-proxy" id=75295602-b616-4b27-9d41-21bc1d4647c4 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	731580839d70d       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                                             12 seconds ago      Exited              hello-world-app                          1                   fb983610bdce8       hello-world-app-65bdb79f98-p8crq
	560ed9ed52e7e       1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a                                                                             55 seconds ago      Exited              minikube-ingress-dns                     5                   228551d5508c7       kube-ingress-dns-minikube
	61a742d7a586b       docker.io/library/nginx@sha256:203cba3f56d7dba1d66b95c091db65a4f0778eb5d16e76151e73e0413e317328                                              2 minutes ago       Running             nginx                                    0                   be994479502e2       nginx
	5e45c9b6b5fc6       registry.k8s.io/ingress-nginx/controller@sha256:28e4b55899689e0af10b7204f0f76ce2a2941febfd73f59983749cb13bca6f96                             3 minutes ago       Running             controller                               0                   140b2bf0ae50b       ingress-nginx-controller-7b4698b8c7-t9x2c
	f7687af86ab66       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	e284fa186f3e5       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	5dfcd8e197d26       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	2f02f9d7c68e6       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	68a1c67a10186       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	74ef54dfc5e27       8f2588812ab2947d53d2f99b11142e2be088330ec67837bb82801c0d3501af78                                                                             3 minutes ago       Exited              patch                                    2                   e183366bb9982       ingress-nginx-admission-patch-28whg
	af6a6d78dabf2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                                 3 minutes ago       Running             gcp-auth                                 0                   a6754fbd60916       gcp-auth-58478865f7-bplzt
	d3299b4a21313       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b                   3 minutes ago       Exited              create                                   0                   718d9f207e466       ingress-nginx-admission-create-vh2bl
	a2703a1890ed2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   3 minutes ago       Running             csi-external-health-monitor-controller   0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	151015d91fcee       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   218c8c1eb9fe1       snapshot-controller-75bbb956b9-4wct2
	6c75e5ca1ca65       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   1f3d27bcd5d4f       snapshot-controller-75bbb956b9-x66wp
	49af45f52a3a0       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago       Running             csi-attacher                             0                   3715291fb45dc       csi-hostpath-attacher-0
	2a5a86c856dcb       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              4 minutes ago       Running             csi-resizer                              0                   a0fc9d4f1bf66       csi-hostpath-resizer-0
	2fd39d328683f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   33bff1d31bec0       storage-provisioner
	508b9734603b5       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                                             4 minutes ago       Running             coredns                                  0                   e8b33b73cb473       coredns-5d78c9869d-l5bkd
	5ae220af0a775       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0                                                                             4 minutes ago       Running             kube-proxy                               0                   3112b46084f89       kube-proxy-cvrjb
	9c48170f08535       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                                                             4 minutes ago       Running             kindnet-cni                              0                   80b97f2f1fabc       kindnet-n94t6
	afd5bfa3324d4       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840                                                                             5 minutes ago       Running             kube-scheduler                           0                   fc54f1742e68d       kube-scheduler-addons-735995
	07e73cacba03d       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4                                                                             5 minutes ago       Running             kube-controller-manager                  0                   c6a0355f54da6       kube-controller-manager-addons-735995
	6ae1a1fe127bd       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                                                             5 minutes ago       Running             etcd                                     0                   3d0e65e686eee       etcd-addons-735995
	fbb09dc418a04       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae                                                                             5 minutes ago       Running             kube-apiserver                           0                   7252c9b135e68       kube-apiserver-addons-735995
	
	* 
	* ==> coredns [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e] <==
	* [INFO] 10.244.0.17:59030 - 35447 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076825s
	[INFO] 10.244.0.17:59030 - 412 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058839s
	[INFO] 10.244.0.17:59030 - 31541 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058191s
	[INFO] 10.244.0.17:59030 - 14376 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041715s
	[INFO] 10.244.0.17:59030 - 62096 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001125565s
	[INFO] 10.244.0.17:59030 - 55535 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000895993s
	[INFO] 10.244.0.17:59030 - 51492 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069629s
	[INFO] 10.244.0.17:56240 - 7951 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109161s
	[INFO] 10.244.0.17:56240 - 35937 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049962s
	[INFO] 10.244.0.17:53340 - 42667 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044176s
	[INFO] 10.244.0.17:53340 - 19294 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048558s
	[INFO] 10.244.0.17:53340 - 56075 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048673s
	[INFO] 10.244.0.17:56240 - 54859 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071336s
	[INFO] 10.244.0.17:53340 - 59059 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049936s
	[INFO] 10.244.0.17:53340 - 18944 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043463s
	[INFO] 10.244.0.17:56240 - 30956 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000101227s
	[INFO] 10.244.0.17:56240 - 21738 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067619s
	[INFO] 10.244.0.17:53340 - 49215 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000097108s
	[INFO] 10.244.0.17:56240 - 1800 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043012s
	[INFO] 10.244.0.17:53340 - 31222 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001279633s
	[INFO] 10.244.0.17:56240 - 63275 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000890881s
	[INFO] 10.244.0.17:56240 - 47075 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001060884s
	[INFO] 10.244.0.17:53340 - 19109 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001003965s
	[INFO] 10.244.0.17:56240 - 37436 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005961s
	[INFO] 10.244.0.17:53340 - 52790 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000106905s
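	
	The query pattern above is ordinary resolv.conf search-list expansion: with ndots:5, the four-dot name hello-world-app.default.svc.cluster.local is first tried with each search suffix (producing the NXDOMAIN runs for .ingress-nginx.svc.cluster.local, .svc.cluster.local, .cluster.local, and .us-east-2.compute.internal) before the absolute name answers NOERROR. A pod resolv.conf consistent with the suffixes seen here would look like the sketch below; the nameserver address is the conventional kube-dns ClusterIP, an assumption rather than something taken from this report.
	
	search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10
	options ndots:5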
	
	* 
	* ==> describe nodes <==
	* Name:               addons-735995
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-735995
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b059332e570e1d712234ec4f823aa77854e7956d
	                    minikube.k8s.io/name=addons-735995
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_05T17_31_51_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-735995
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-735995"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Jun 2023 17:31:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-735995
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Jun 2023 17:36:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Jun 2023 17:34:24 +0000   Mon, 05 Jun 2023 17:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Jun 2023 17:34:24 +0000   Mon, 05 Jun 2023 17:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Jun 2023 17:34:24 +0000   Mon, 05 Jun 2023 17:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Jun 2023 17:34:24 +0000   Mon, 05 Jun 2023 17:32:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-735995
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 3684da2e34114758b7496e92a206a799
	  System UUID:                3a3b4a4e-f4f2-44f2-83a0-39a8ef621246
	  Boot ID:                    da2c815d-c926-431d-a79c-25e8afa61b1d
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-p8crq             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  gcp-auth                    gcp-auth-58478865f7-bplzt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  ingress-nginx               ingress-nginx-controller-7b4698b8c7-t9x2c    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m37s
	  kube-system                 coredns-5d78c9869d-l5bkd                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m43s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpathplugin-jsp8k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 etcd-addons-735995                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m55s
	  kube-system                 kindnet-n94t6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m43s
	  kube-system                 kube-apiserver-addons-735995                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-addons-735995        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-cvrjb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-addons-735995                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 snapshot-controller-75bbb956b9-4wct2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 snapshot-controller-75bbb956b9-x66wp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m38s  kube-proxy       
	  Normal  Starting                 4m56s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m56s  kubelet          Node addons-735995 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s  kubelet          Node addons-735995 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s  kubelet          Node addons-735995 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m44s  node-controller  Node addons-735995 event: Registered Node addons-735995 in Controller
	  Normal  NodeReady                4m12s  kubelet          Node addons-735995 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000769] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000df8e4d15
	[  +0.001037] FS-Cache: N-key=[8] '7acfc90000000000'
	[  +0.002865] FS-Cache: Duplicate cookie detected
	[  +0.000688] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=0000000048956af5
	[  +0.001033] FS-Cache: O-key=[8] '7acfc90000000000'
	[  +0.000786] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000f0906f1d
	[  +0.001019] FS-Cache: N-key=[8] '7acfc90000000000'
	[  +2.251230] FS-Cache: Duplicate cookie detected
	[  +0.000694] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000954] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=00000000b0ed0c6e
	[  +0.001110] FS-Cache: O-key=[8] '79cfc90000000000'
	[  +0.000701] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000df8e4d15
	[  +0.001059] FS-Cache: N-key=[8] '79cfc90000000000'
	[  +0.398785] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001039] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=000000006ec03a4d
	[  +0.001323] FS-Cache: O-key=[8] '82cfc90000000000'
	[  +0.000865] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000965] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000006926a378
	[  +0.001086] FS-Cache: N-key=[8] '82cfc90000000000'
	[Jun 5 16:26] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576] <==
	* {"level":"info","ts":"2023-06-05T17:31:43.448Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-05T17:31:43.448Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-05T17:31:43.447Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-06-05T17:31:43.448Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-06-05T17:31:43.448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-06-05T17:31:43.448Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-06-05T17:31:44.227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-06-05T17:31:44.232Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-735995 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-05T17:31:44.232Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T17:31:44.233Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-05T17:31:44.235Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T17:31:44.243Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-06-05T17:31:44.248Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:31:44.251Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-05T17:31:44.252Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-05T17:31:44.252Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:31:44.252Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:31:44.252Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:32:07.870Z","caller":"traceutil/trace.go:171","msg":"trace[1942935828] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"111.703651ms","start":"2023-06-05T17:32:07.759Z","end":"2023-06-05T17:32:07.870Z","steps":["trace[1942935828] 'process raft request'  (duration: 63.224955ms)","trace[1942935828] 'compare'  (duration: 48.188662ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [af6a6d78dabf25e339e1e02c6946c7e224b0c37221091a858f164ffbdadca047] <==
	* 2023/06/05 17:33:17 GCP Auth Webhook started!
	2023/06/05 17:33:57 Ready to marshal response ...
	2023/06/05 17:33:57 Ready to write response ...
	2023/06/05 17:34:10 Ready to marshal response ...
	2023/06/05 17:34:10 Ready to write response ...
	2023/06/05 17:36:30 Ready to marshal response ...
	2023/06/05 17:36:30 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  17:36:46 up  2:18,  0 users,  load average: 0.39, 1.83, 2.84
	Linux addons-735995 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c] <==
	* I0605 17:34:44.343097       1 main.go:227] handling current node
	I0605 17:34:54.353269       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:34:54.353297       1 main.go:227] handling current node
	I0605 17:35:04.359107       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:04.359141       1 main.go:227] handling current node
	I0605 17:35:14.363885       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:14.363912       1 main.go:227] handling current node
	I0605 17:35:24.372175       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:24.372206       1 main.go:227] handling current node
	I0605 17:35:34.382658       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:34.382685       1 main.go:227] handling current node
	I0605 17:35:44.387214       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:44.387240       1 main.go:227] handling current node
	I0605 17:35:54.399260       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:54.399291       1 main.go:227] handling current node
	I0605 17:36:04.406679       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:36:04.406785       1 main.go:227] handling current node
	I0605 17:36:14.415458       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:36:14.415487       1 main.go:227] handling current node
	I0605 17:36:24.426431       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:36:24.426556       1 main.go:227] handling current node
	I0605 17:36:34.430910       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:36:34.430938       1 main.go:227] handling current node
	I0605 17:36:44.444033       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:36:44.444382       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4] <==
	* E0605 17:32:34.806365       1 dispatcher.go:206] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.10.58:443: connect: connection refused
	I0605 17:32:47.378264       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.10.69:443: connect: connection refused
	I0605 17:32:47.378287       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0605 17:33:05.546663       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.10.69:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.10.69:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.10.69:443: connect: connection refused
	I0605 17:33:05.697688       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0605 17:33:05.706058       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0605 17:33:47.385594       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0605 17:34:03.763228       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0605 17:34:03.781493       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0605 17:34:04.800420       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0605 17:34:06.624501       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0605 17:34:06.624534       1 handler_proxy.go:100] no RequestInfo found in the context
	E0605 17:34:06.624566       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0605 17:34:06.624580       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0605 17:34:06.649951       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0605 17:34:10.143766       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0605 17:34:10.595357       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.98.188.175]
	E0605 17:35:06.624955       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0605 17:35:06.624989       1 handler_proxy.go:100] no RequestInfo found in the context
	E0605 17:35:06.625029       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0605 17:35:06.625037       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0605 17:36:30.664732       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.98.252.150]
	E0605 17:36:46.605287       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400f3311a0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400e626050), ResponseWriter:(*httpsnoop.rw)(0x400e626050), Flusher:(*httpsnoop.rw)(0x400e626050), CloseNotifier:(*httpsnoop.rw)(0x400e626050), Pusher:(*httpsnoop.rw)(0x400e626050)}}, encoder:(*versioning.codec)(0x400faefd60), memAllocator:(*runtime.Allocator)(0x4004b72fc0)})
	
	* 
	* ==> kube-controller-manager [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a] <==
	* I0605 17:33:22.058018       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0605 17:33:36.023214       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0605 17:33:36.052307       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	E0605 17:34:04.802643       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0605 17:34:06.098049       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:34:06.098111       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0605 17:34:09.115556       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:34:09.115670       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0605 17:34:12.917658       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:34:12.917696       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0605 17:34:13.904657       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0605 17:34:22.041006       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:34:22.041145       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0605 17:34:32.673277       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0605 17:34:32.673453       1 shared_informer.go:318] Caches are synced for resource quota
	I0605 17:34:33.143432       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0605 17:34:33.143500       1 shared_informer.go:318] Caches are synced for garbage collector
	W0605 17:34:45.637592       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:34:45.637631       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0605 17:35:27.543322       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:35:27.543440       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0605 17:36:25.742957       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:36:25.742996       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0605 17:36:30.401405       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0605 17:36:30.435755       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-p8crq"
	
	* 
	* ==> kube-proxy [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098] <==
	* I0605 17:32:03.938108       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0605 17:32:03.938609       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0605 17:32:03.938703       1 server_others.go:551] "Using iptables proxy"
	I0605 17:32:08.171982       1 server_others.go:190] "Using iptables Proxier"
	I0605 17:32:08.176007       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0605 17:32:08.187518       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0605 17:32:08.187632       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0605 17:32:08.187729       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0605 17:32:08.244193       1 server.go:657] "Version info" version="v1.27.2"
	I0605 17:32:08.244227       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0605 17:32:08.245927       1 config.go:188] "Starting service config controller"
	I0605 17:32:08.246005       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0605 17:32:08.246127       1 config.go:97] "Starting endpoint slice config controller"
	I0605 17:32:08.246143       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0605 17:32:08.821717       1 config.go:315] "Starting node config controller"
	I0605 17:32:08.821818       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0605 17:32:08.861429       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0605 17:32:08.863146       1 shared_informer.go:318] Caches are synced for service config
	I0605 17:32:08.922792       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719] <==
	* W0605 17:31:48.061857       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0605 17:31:48.062634       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0605 17:31:48.061888       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0605 17:31:48.062726       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0605 17:31:48.061917       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0605 17:31:48.062810       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0605 17:31:48.061956       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0605 17:31:48.062901       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0605 17:31:48.062026       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0605 17:31:48.062988       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0605 17:31:48.062065       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0605 17:31:48.063081       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0605 17:31:48.062206       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0605 17:31:48.063172       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0605 17:31:48.065682       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0605 17:31:48.065798       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0605 17:31:48.066153       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0605 17:31:48.066228       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0605 17:31:48.066325       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0605 17:31:48.066379       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0605 17:31:48.066479       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0605 17:31:48.066517       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0605 17:31:48.066606       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0605 17:31:48.066657       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0605 17:31:49.155440       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jun 05 17:36:41 addons-735995 kubelet[1365]: W0605 17:36:41.313337    1365 container.go:586] Failed to update stats for container "/crio/crio-1c854ed38bc610e6f2c9d8fb2b5b24fc03549b46bd5f0224bbd38dd414233464": unable to determine device info for dir: /var/lib/containers/storage/overlay/50d56cd7af445ca57d92c4ad47ed7b7623c66a73b597a778b61e6bc623a53458/diff: stat failed on /var/lib/containers/storage/overlay/50d56cd7af445ca57d92c4ad47ed7b7623c66a73b597a778b61e6bc623a53458/diff with error: no such file or directory, continuing to push stats
	Jun 05 17:36:41 addons-735995 kubelet[1365]: I0605 17:36:41.856699    1365 kubelet_pods.go:894] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-6c5b7" secret="" err="secret \"gcp-auth\" not found"
	Jun 05 17:36:41 addons-735995 kubelet[1365]: I0605 17:36:41.856738    1365 scope.go:115] "RemoveContainer" containerID="10175b9c665c654d4d7e7f6171f554904cbdb2bc2d9f2d1af8f7827c2401c2f2"
	Jun 05 17:36:41 addons-735995 kubelet[1365]: E0605 17:36:41.856971    1365 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-6c5b7_kube-system(542106f4-ef94-45fe-8183-768a7d7b500f)\"" pod="kube-system/registry-proxy-6c5b7" podUID=542106f4-ef94-45fe-8183-768a7d7b500f
	Jun 05 17:36:42 addons-735995 kubelet[1365]: I0605 17:36:42.856770    1365 kubelet_pods.go:894] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-d94xj" secret="" err="secret \"gcp-auth\" not found"
	Jun 05 17:36:42 addons-735995 kubelet[1365]: I0605 17:36:42.856841    1365 scope.go:115] "RemoveContainer" containerID="560ed9ed52e7efd905db3a986a263bdbd514a42a66070a4eddec6f5083d6a6b2"
	Jun 05 17:36:42 addons-735995 kubelet[1365]: E0605 17:36:42.857105    1365 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(caadaba2-93ce-42a1-8339-fc8d5e28c44a)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=caadaba2-93ce-42a1-8339-fc8d5e28c44a
	Jun 05 17:36:44 addons-735995 kubelet[1365]: I0605 17:36:44.707971    1365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkf9x\" (UniqueName: \"kubernetes.io/projected/542106f4-ef94-45fe-8183-768a7d7b500f-kube-api-access-gkf9x\") pod \"542106f4-ef94-45fe-8183-768a7d7b500f\" (UID: \"542106f4-ef94-45fe-8183-768a7d7b500f\") "
	Jun 05 17:36:44 addons-735995 kubelet[1365]: I0605 17:36:44.720791    1365 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/542106f4-ef94-45fe-8183-768a7d7b500f-kube-api-access-gkf9x" (OuterVolumeSpecName: "kube-api-access-gkf9x") pod "542106f4-ef94-45fe-8183-768a7d7b500f" (UID: "542106f4-ef94-45fe-8183-768a7d7b500f"). InnerVolumeSpecName "kube-api-access-gkf9x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 05 17:36:44 addons-735995 kubelet[1365]: I0605 17:36:44.808774    1365 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gkf9x\" (UniqueName: \"kubernetes.io/projected/542106f4-ef94-45fe-8183-768a7d7b500f-kube-api-access-gkf9x\") on node \"addons-735995\" DevicePath \"\""
	Jun 05 17:36:44 addons-735995 kubelet[1365]: I0605 17:36:44.909430    1365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpm7h\" (UniqueName: \"kubernetes.io/projected/3b4e0792-a45f-41f1-911a-36c1609f1e26-kube-api-access-bpm7h\") pod \"3b4e0792-a45f-41f1-911a-36c1609f1e26\" (UID: \"3b4e0792-a45f-41f1-911a-36c1609f1e26\") "
	Jun 05 17:36:44 addons-735995 kubelet[1365]: I0605 17:36:44.912375    1365 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4e0792-a45f-41f1-911a-36c1609f1e26-kube-api-access-bpm7h" (OuterVolumeSpecName: "kube-api-access-bpm7h") pod "3b4e0792-a45f-41f1-911a-36c1609f1e26" (UID: "3b4e0792-a45f-41f1-911a-36c1609f1e26"). InnerVolumeSpecName "kube-api-access-bpm7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 05 17:36:45 addons-735995 kubelet[1365]: I0605 17:36:45.016346    1365 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bpm7h\" (UniqueName: \"kubernetes.io/projected/3b4e0792-a45f-41f1-911a-36c1609f1e26-kube-api-access-bpm7h\") on node \"addons-735995\" DevicePath \"\""
	Jun 05 17:36:45 addons-735995 kubelet[1365]: I0605 17:36:45.527421    1365 scope.go:115] "RemoveContainer" containerID="df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08"
	Jun 05 17:36:45 addons-735995 kubelet[1365]: I0605 17:36:45.564474    1365 scope.go:115] "RemoveContainer" containerID="df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08"
	Jun 05 17:36:45 addons-735995 kubelet[1365]: E0605 17:36:45.564884    1365 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08\": container with ID starting with df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08 not found: ID does not exist" containerID="df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08"
	Jun 05 17:36:45 addons-735995 kubelet[1365]: I0605 17:36:45.564923    1365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08} err="failed to get container status \"df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08\": rpc error: code = NotFound desc = could not find container \"df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08\": container with ID starting with df61eeead67fc6fb9115287e4e0e6e294777ac50e6fceecb6fa516c5f11b2e08 not found: ID does not exist"
	Jun 05 17:36:45 addons-735995 kubelet[1365]: I0605 17:36:45.564936    1365 scope.go:115] "RemoveContainer" containerID="10175b9c665c654d4d7e7f6171f554904cbdb2bc2d9f2d1af8f7827c2401c2f2"
	Jun 05 17:36:46 addons-735995 kubelet[1365]: I0605 17:36:46.426333    1365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxd5v\" (UniqueName: \"kubernetes.io/projected/caadaba2-93ce-42a1-8339-fc8d5e28c44a-kube-api-access-qxd5v\") pod \"caadaba2-93ce-42a1-8339-fc8d5e28c44a\" (UID: \"caadaba2-93ce-42a1-8339-fc8d5e28c44a\") "
	Jun 05 17:36:46 addons-735995 kubelet[1365]: I0605 17:36:46.429816    1365 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caadaba2-93ce-42a1-8339-fc8d5e28c44a-kube-api-access-qxd5v" (OuterVolumeSpecName: "kube-api-access-qxd5v") pod "caadaba2-93ce-42a1-8339-fc8d5e28c44a" (UID: "caadaba2-93ce-42a1-8339-fc8d5e28c44a"). InnerVolumeSpecName "kube-api-access-qxd5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 05 17:36:46 addons-735995 kubelet[1365]: I0605 17:36:46.526992    1365 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qxd5v\" (UniqueName: \"kubernetes.io/projected/caadaba2-93ce-42a1-8339-fc8d5e28c44a-kube-api-access-qxd5v\") on node \"addons-735995\" DevicePath \"\""
	Jun 05 17:36:46 addons-735995 kubelet[1365]: I0605 17:36:46.534389    1365 scope.go:115] "RemoveContainer" containerID="560ed9ed52e7efd905db3a986a263bdbd514a42a66070a4eddec6f5083d6a6b2"
	Jun 05 17:36:46 addons-735995 kubelet[1365]: I0605 17:36:46.862556    1365 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=3b4e0792-a45f-41f1-911a-36c1609f1e26 path="/var/lib/kubelet/pods/3b4e0792-a45f-41f1-911a-36c1609f1e26/volumes"
	Jun 05 17:36:46 addons-735995 kubelet[1365]: I0605 17:36:46.863639    1365 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=542106f4-ef94-45fe-8183-768a7d7b500f path="/var/lib/kubelet/pods/542106f4-ef94-45fe-8183-768a7d7b500f/volumes"
	Jun 05 17:36:46 addons-735995 kubelet[1365]: I0605 17:36:46.865693    1365 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=caadaba2-93ce-42a1-8339-fc8d5e28c44a path="/var/lib/kubelet/pods/caadaba2-93ce-42a1-8339-fc8d5e28c44a/volumes"
	
	* 
	* ==> storage-provisioner [2fd39d328683f03e182fccf5ceecc92e929532641c49212c34a60ad5f49c1998] <==
	* I0605 17:32:35.827758       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0605 17:32:35.855643       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0605 17:32:35.855745       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0605 17:32:35.883761       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0605 17:32:35.884218       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-735995_2cc398ac-5fbd-4dab-8e61-87cf5b348e7a!
	I0605 17:32:35.886509       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48e22060-8807-4fde-933f-4dc9cf03e09c", APIVersion:"v1", ResourceVersion:"804", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-735995_2cc398ac-5fbd-4dab-8e61-87cf5b348e7a became leader
	I0605 17:32:35.985319       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-735995_2cc398ac-5fbd-4dab-8e61-87cf5b348e7a!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-735995 -n addons-735995
helpers_test.go:261: (dbg) Run:  kubectl --context addons-735995 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (180.81s)
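For context on the failure above: the external-access check at addons_test.go:361 issues a GET against the registry NodePort and retries with exponential backoff (1s, 2s, 4s, 8s over five attempts, matching the [DEBUG]/[ERR] retry lines in the log). A minimal standalone sketch of an equivalent probe in Go, assuming only the node IP and port reported by the test; probeRegistry is a hypothetical helper, not the suite's actual code:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeRegistry GETs the registry endpoint, retrying with exponential
// backoff (1s, 2s, 4s, 8s) for up to five attempts, mirroring the retry
// pattern visible in the log above.
func probeRegistry(url string) error {
	backoff := time.Second
	var lastErr error
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // registry reachable
		}
		lastErr = err
		if attempt < 5 {
			time.Sleep(backoff)
			backoff *= 2
		}
	}
	return fmt.Errorf("giving up after 5 attempt(s): %w", lastErr)
}

func main() {
	if err := probeRegistry("http://192.168.49.2:5000"); err != nil {
		fmt.Println(err)
	}
}

Against a healthy registry addon this exits silently; here every attempt was refused because registry-proxy was crash-looping (see the CrashLoopBackOff entries in the kubelet log above).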

                                                
                                    
TestAddons/parallel/Ingress (168.11s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-735995 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-735995 replace --force -f testdata/nginx-ingress-v1.yaml
2023/06/05 17:34:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:09 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
addons_test.go:221: (dbg) Run:  kubectl --context addons-735995 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3b35c648-b81e-49f0-abab-31629338bebb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3b35c648-b81e-49f0-abab-31629338bebb] Running
2023/06/05 17:34:17 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:17 [DEBUG] GET http://192.168.49.2:5000
2023/06/05 17:34:17 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:17 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/06/05 17:34:18 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:18 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.011702552s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-735995 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
2023/06/05 17:34:20 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:20 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/06/05 17:34:24 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:24 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/06/05 17:34:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:33 [DEBUG] GET http://192.168.49.2:5000
2023/06/05 17:34:33 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:33 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/06/05 17:34:34 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:34 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/06/05 17:34:36 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:36 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/06/05 17:34:40 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:40 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/06/05 17:34:48 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:49 [DEBUG] GET http://192.168.49.2:5000
2023/06/05 17:34:49 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:49 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/06/05 17:34:50 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:50 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/06/05 17:34:52 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:52 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/06/05 17:34:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:56 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/06/05 17:35:04 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:04 [DEBUG] GET http://192.168.49.2:5000
2023/06/05 17:35:04 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:04 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/06/05 17:35:05 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:05 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/06/05 17:35:07 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:07 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/06/05 17:35:11 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:11 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/06/05 17:35:19 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:22 [DEBUG] GET http://192.168.49.2:5000
2023/06/05 17:35:22 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:22 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/06/05 17:35:23 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:23 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/06/05 17:35:25 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:25 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/06/05 17:35:29 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:29 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/06/05 17:35:37 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:41 [DEBUG] GET http://192.168.49.2:5000
2023/06/05 17:35:41 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:41 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/06/05 17:35:42 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:42 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/06/05 17:35:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:44 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/06/05 17:35:48 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:35:48 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/06/05 17:35:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:36:01 [DEBUG] GET http://192.168.49.2:5000
2023/06/05 17:36:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:36:01 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2023/06/05 17:36:02 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:36:02 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2023/06/05 17:36:04 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:36:04 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/06/05 17:36:08 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:36:08 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/06/05 17:36:16 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:36:29 [DEBUG] GET http://192.168.49.2:5000
2023/06/05 17:36:29 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:36:29 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-735995 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.529431793s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
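ssh exit status 28 above is curl's operation-timed-out code: the request to the ingress controller never completed within the deadline. A minimal sketch of the same request made directly in Go, assuming the Host rule nginx.example.com from testdata/nginx-ingress-v1.yaml and a node where port 80 is reachable:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Replicate `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`:
	// the Host header selects the nginx Ingress rule on the controller.
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // Go sends this as the Host header

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err) // the test hit a timeout here
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}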
addons_test.go:262: (dbg) Run:  kubectl --context addons-735995 replace --force -f testdata/ingress-dns-example-v1.yaml
2023/06/05 17:36:30 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:36:30 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-735995 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
2023/06/05 17:36:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:36:32 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2023/06/05 17:36:36 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:36:36 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2023/06/05 17:36:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.042182869s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
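Note: the nslookup above uses the node IP 192.168.49.2 directly as the DNS server, which is how the ingress-dns addon is meant to resolve test names like hello-john.test. The same lookup with a pinned resolver, sketched in Go (assumed equivalent, not the test code):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Send every query to the minikube node's DNS on port 53, ignoring
		// /etc/resolv.conf (what "nslookup name 192.168.49.2" does).
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err) // the run above timed out here
		return
	}
	fmt.Println(addrs)
}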
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-735995 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-735995 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-735995 addons disable ingress --alsologtostderr -v=1: (7.668526515s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-735995
helpers_test.go:235: (dbg) docker inspect addons-735995:

-- stdout --
	[
	    {
	        "Id": "d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee",
	        "Created": "2023-06-05T17:31:22.465496878Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 408780,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-05T17:31:22.784389528Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:80ea0da8caa6eb7997e8d55fe8736424844c5160aabf0e85547dc140c538e81f",
	        "ResolvConfPath": "/var/lib/docker/containers/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/hostname",
	        "HostsPath": "/var/lib/docker/containers/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/hosts",
	        "LogPath": "/var/lib/docker/containers/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee-json.log",
	        "Name": "/addons-735995",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-735995:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-735995",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f75dc16798ef17a9ee419c8cbed7f80b10986520aedeaf4741b2810dc0f0ff3a-init/diff:/var/lib/docker/overlay2/12deadd96699cc2736cf6d24a9900cb6d72f9bc5f3f15d793b28adb475def155/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f75dc16798ef17a9ee419c8cbed7f80b10986520aedeaf4741b2810dc0f0ff3a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f75dc16798ef17a9ee419c8cbed7f80b10986520aedeaf4741b2810dc0f0ff3a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f75dc16798ef17a9ee419c8cbed7f80b10986520aedeaf4741b2810dc0f0ff3a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-735995",
	                "Source": "/var/lib/docker/volumes/addons-735995/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-735995",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-735995",
	                "name.minikube.sigs.k8s.io": "addons-735995",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3286cc24a1af956fce6ba6162fcacaa3d0c7bb789e5ed3106b69f6620cc75322",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3286cc24a1af",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-735995": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d36a4170624d",
	                        "addons-735995"
	                    ],
	                    "NetworkID": "0b90d709d07d267fe9ad697ed6f8beb09db82befd8b2368e245ec4b456227819",
	                    "EndpointID": "2ce097cda11197b55e493209eedc5dbbd5670bb6c3b2e221f950bbccfbd35e31",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
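Note: per the Ports section of the dump above, the registry's 5000/tcp is also published on the host loopback as 127.0.0.1:33111. That mapping can be pulled out with the same docker inspect Go-template style minikube itself uses for 22/tcp later in this log; a sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Extract the host-side port bound to the container's 5000/tcp.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}`,
		"addons-735995").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("registry published at 127.0.0.1:" + strings.TrimSpace(string(out))) // 33111 above
}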
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-735995 -n addons-735995
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-735995 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-735995 logs -n 25: (1.710215346s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-535520   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC |                     |
	|         | -p download-only-535520        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-535520   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC |                     |
	|         | -p download-only-535520        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:30 UTC |
	| delete  | -p download-only-535520        | download-only-535520   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:30 UTC |
	| delete  | -p download-only-535520        | download-only-535520   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:30 UTC |
	| start   | --download-only -p             | download-docker-501309 | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC |                     |
	|         | download-docker-501309         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-501309      | download-docker-501309 | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:30 UTC |
	| start   | --download-only -p             | binary-mirror-444845   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC |                     |
	|         | binary-mirror-444845           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43845         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-444845        | binary-mirror-444845   | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:30 UTC |
	| start   | -p addons-735995               | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC | 05 Jun 23 17:33 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:33 UTC | 05 Jun 23 17:33 UTC |
	|         | addons-735995                  |                        |         |         |                     |                     |
	| addons  | addons-735995 addons           | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:33 UTC | 05 Jun 23 17:33 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-735995 ip               | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:34 UTC | 05 Jun 23 17:34 UTC |
	| addons  | disable inspektor-gadget -p    | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:34 UTC | 05 Jun 23 17:34 UTC |
	|         | addons-735995                  |                        |         |         |                     |                     |
	| ssh     | addons-735995 ssh curl -s      | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-735995 ip               | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:36 UTC | 05 Jun 23 17:36 UTC |
	| addons  | addons-735995 addons disable   | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:36 UTC | 05 Jun 23 17:36 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-735995 addons disable   | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:36 UTC | 05 Jun 23 17:36 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-735995 addons disable   | addons-735995          | jenkins | v1.30.1 | 05 Jun 23 17:36 UTC | 05 Jun 23 17:36 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/05 17:30:59
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0605 17:30:59.326987  408313 out.go:296] Setting OutFile to fd 1 ...
	I0605 17:30:59.327145  408313 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:30:59.327155  408313 out.go:309] Setting ErrFile to fd 2...
	I0605 17:30:59.327161  408313 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:30:59.327320  408313 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 17:30:59.327767  408313 out.go:303] Setting JSON to false
	I0605 17:30:59.328844  408313 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":7992,"bootTime":1685978268,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 17:30:59.328914  408313 start.go:137] virtualization:  
	I0605 17:30:59.331813  408313 out.go:177] * [addons-735995] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 17:30:59.334799  408313 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 17:30:59.336710  408313 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 17:30:59.334995  408313 notify.go:220] Checking for updates...
	I0605 17:30:59.340951  408313 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:30:59.343111  408313 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 17:30:59.345022  408313 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 17:30:59.346828  408313 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 17:30:59.349158  408313 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 17:30:59.375159  408313 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 17:30:59.375251  408313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:30:59.453762  408313 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-06-05 17:30:59.442899443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:30:59.453870  408313 docker.go:294] overlay module found
	I0605 17:30:59.456139  408313 out.go:177] * Using the docker driver based on user configuration
	I0605 17:30:59.457898  408313 start.go:297] selected driver: docker
	I0605 17:30:59.457936  408313 start.go:875] validating driver "docker" against <nil>
	I0605 17:30:59.457964  408313 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 17:30:59.458608  408313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:30:59.529284  408313 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-06-05 17:30:59.519742281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:30:59.529453  408313 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0605 17:30:59.529683  408313 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0605 17:30:59.531469  408313 out.go:177] * Using Docker driver with root privileges
	I0605 17:30:59.533447  408313 cni.go:84] Creating CNI manager for ""
	I0605 17:30:59.533462  408313 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 17:30:59.533472  408313 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0605 17:30:59.533491  408313 start_flags.go:319] config:
	{Name:addons-735995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-735995 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:30:59.535686  408313 out.go:177] * Starting control plane node addons-735995 in cluster addons-735995
	I0605 17:30:59.537248  408313 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 17:30:59.538768  408313 out.go:177] * Pulling base image ...
	I0605 17:30:59.540391  408313 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:30:59.540448  408313 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0605 17:30:59.540460  408313 cache.go:57] Caching tarball of preloaded images
	I0605 17:30:59.540467  408313 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon
	I0605 17:30:59.540540  408313 preload.go:174] Found /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0605 17:30:59.540551  408313 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0605 17:30:59.540905  408313 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/config.json ...
	I0605 17:30:59.540937  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/config.json: {Name:mk3fe78a0ad294e23755d3263268d2e6984b6994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:30:59.557974  408313 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f to local cache
	I0605 17:30:59.558089  408313 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local cache directory
	I0605 17:30:59.558110  408313 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local cache directory, skipping pull
	I0605 17:30:59.558115  408313 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f exists in cache, skipping pull
	I0605 17:30:59.558122  408313 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f as a tarball
	I0605 17:30:59.558127  408313 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f from local cache
	I0605 17:31:15.021849  408313 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f from cached tarball
	I0605 17:31:15.021886  408313 cache.go:195] Successfully downloaded all kic artifacts
	I0605 17:31:15.021936  408313 start.go:364] acquiring machines lock for addons-735995: {Name:mk0ceb74f7c7ec6a93eb00c47587bcbeb49c1769 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 17:31:15.023182  408313 start.go:368] acquired machines lock for "addons-735995" in 1.21141ms
	I0605 17:31:15.023249  408313 start.go:93] Provisioning new machine with config: &{Name:addons-735995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-735995 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0605 17:31:15.023348  408313 start.go:125] createHost starting for "" (driver="docker")
	I0605 17:31:15.026141  408313 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0605 17:31:15.026472  408313 start.go:159] libmachine.API.Create for "addons-735995" (driver="docker")
	I0605 17:31:15.026504  408313 client.go:168] LocalClient.Create starting
	I0605 17:31:15.026641  408313 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem
	I0605 17:31:15.495319  408313 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem
	I0605 17:31:15.868885  408313 cli_runner.go:164] Run: docker network inspect addons-735995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0605 17:31:15.889980  408313 cli_runner.go:211] docker network inspect addons-735995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0605 17:31:15.890061  408313 network_create.go:281] running [docker network inspect addons-735995] to gather additional debugging logs...
	I0605 17:31:15.890082  408313 cli_runner.go:164] Run: docker network inspect addons-735995
	W0605 17:31:15.908355  408313 cli_runner.go:211] docker network inspect addons-735995 returned with exit code 1
	I0605 17:31:15.908389  408313 network_create.go:284] error running [docker network inspect addons-735995]: docker network inspect addons-735995: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-735995 not found
	I0605 17:31:15.908401  408313 network_create.go:286] output of [docker network inspect addons-735995]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-735995 not found
	
	** /stderr **
	I0605 17:31:15.908481  408313 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 17:31:15.929143  408313 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400166a8e0}
	I0605 17:31:15.929186  408313 network_create.go:123] attempt to create docker network addons-735995 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0605 17:31:15.929242  408313 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-735995 addons-735995
	I0605 17:31:16.021244  408313 network_create.go:107] docker network addons-735995 192.168.49.0/24 created
	I0605 17:31:16.021276  408313 kic.go:117] calculated static IP "192.168.49.2" for the "addons-735995" container
	I0605 17:31:16.021362  408313 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0605 17:31:16.040896  408313 cli_runner.go:164] Run: docker volume create addons-735995 --label name.minikube.sigs.k8s.io=addons-735995 --label created_by.minikube.sigs.k8s.io=true
	I0605 17:31:16.059628  408313 oci.go:103] Successfully created a docker volume addons-735995
	I0605 17:31:16.059728  408313 cli_runner.go:164] Run: docker run --rm --name addons-735995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-735995 --entrypoint /usr/bin/test -v addons-735995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -d /var/lib
	I0605 17:31:18.191403  408313 cli_runner.go:217] Completed: docker run --rm --name addons-735995-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-735995 --entrypoint /usr/bin/test -v addons-735995:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -d /var/lib: (2.131623299s)
	I0605 17:31:18.191436  408313 oci.go:107] Successfully prepared a docker volume addons-735995
	I0605 17:31:18.191458  408313 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:31:18.191477  408313 kic.go:190] Starting extracting preloaded images to volume ...
	I0605 17:31:18.191563  408313 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-735995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -I lz4 -xf /preloaded.tar -C /extractDir
	I0605 17:31:22.388961  408313 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-735995:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -I lz4 -xf /preloaded.tar -C /extractDir: (4.197339272s)
	I0605 17:31:22.388999  408313 kic.go:199] duration metric: took 4.197518 seconds to extract preloaded images to volume
	W0605 17:31:22.389149  408313 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0605 17:31:22.389264  408313 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0605 17:31:22.449364  408313 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-735995 --name addons-735995 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-735995 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-735995 --network addons-735995 --ip 192.168.49.2 --volume addons-735995:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f
	I0605 17:31:22.794143  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Running}}
	I0605 17:31:22.830592  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:31:22.857396  408313 cli_runner.go:164] Run: docker exec addons-735995 stat /var/lib/dpkg/alternatives/iptables
	I0605 17:31:22.957583  408313 oci.go:144] the created container "addons-735995" has a running status.
	I0605 17:31:22.957609  408313 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa...
	I0605 17:31:23.187062  408313 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0605 17:31:23.216782  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:31:23.255052  408313 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0605 17:31:23.255075  408313 kic_runner.go:114] Args: [docker exec --privileged addons-735995 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0605 17:31:23.350962  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:31:23.377023  408313 machine.go:88] provisioning docker machine ...
	I0605 17:31:23.377058  408313 ubuntu.go:169] provisioning hostname "addons-735995"
	I0605 17:31:23.377129  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:23.403693  408313 main.go:141] libmachine: Using SSH client type: native
	I0605 17:31:23.404158  408313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0605 17:31:23.404171  408313 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-735995 && echo "addons-735995" | sudo tee /etc/hostname
	I0605 17:31:23.404920  408313 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60084->127.0.0.1:33113: read: connection reset by peer
	I0605 17:31:26.559791  408313 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-735995
	
	I0605 17:31:26.559873  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:26.579289  408313 main.go:141] libmachine: Using SSH client type: native
	I0605 17:31:26.579725  408313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0605 17:31:26.579742  408313 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-735995' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-735995/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-735995' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0605 17:31:26.721528  408313 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0605 17:31:26.721554  408313 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16634-402421/.minikube CaCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16634-402421/.minikube}
	I0605 17:31:26.721581  408313 ubuntu.go:177] setting up certificates
	I0605 17:31:26.721602  408313 provision.go:83] configureAuth start
	I0605 17:31:26.721674  408313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-735995
	I0605 17:31:26.740476  408313 provision.go:138] copyHostCerts
	I0605 17:31:26.740562  408313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem (1082 bytes)
	I0605 17:31:26.740686  408313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem (1123 bytes)
	I0605 17:31:26.740748  408313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem (1675 bytes)
	I0605 17:31:26.740794  408313 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem org=jenkins.addons-735995 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-735995]
	I0605 17:31:28.545326  408313 provision.go:172] copyRemoteCerts
	I0605 17:31:28.545455  408313 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0605 17:31:28.545507  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:28.564191  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:31:28.667638  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0605 17:31:28.699391  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0605 17:31:28.734001  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0605 17:31:28.763699  408313 provision.go:86] duration metric: configureAuth took 2.0420591s
	I0605 17:31:28.763725  408313 ubuntu.go:193] setting minikube options for container-runtime
	I0605 17:31:28.763940  408313 config.go:182] Loaded profile config "addons-735995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:31:28.764043  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:28.782569  408313 main.go:141] libmachine: Using SSH client type: native
	I0605 17:31:28.783018  408313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0605 17:31:28.783042  408313 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0605 17:31:29.049202  408313 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0605 17:31:29.049228  408313 machine.go:91] provisioned docker machine in 5.672186833s
	I0605 17:31:29.049238  408313 client.go:171] LocalClient.Create took 14.022724195s
	I0605 17:31:29.049250  408313 start.go:167] duration metric: libmachine.API.Create for "addons-735995" took 14.022779398s
	I0605 17:31:29.049257  408313 start.go:300] post-start starting for "addons-735995" (driver="docker")
	I0605 17:31:29.049267  408313 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0605 17:31:29.049333  408313 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0605 17:31:29.049383  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:29.068302  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:31:29.172863  408313 ssh_runner.go:195] Run: cat /etc/os-release
	I0605 17:31:29.177506  408313 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0605 17:31:29.177556  408313 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0605 17:31:29.177567  408313 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0605 17:31:29.177580  408313 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0605 17:31:29.177593  408313 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/addons for local assets ...
	I0605 17:31:29.177668  408313 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/files for local assets ...
	I0605 17:31:29.177694  408313 start.go:303] post-start completed in 128.427495ms
	I0605 17:31:29.178017  408313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-735995
	I0605 17:31:29.196617  408313 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/config.json ...
	I0605 17:31:29.196900  408313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 17:31:29.196950  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:29.214816  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:31:29.310396  408313 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0605 17:31:29.316254  408313 start.go:128] duration metric: createHost completed in 14.29287158s
	I0605 17:31:29.316278  408313 start.go:83] releasing machines lock for "addons-735995", held for 14.293062086s
	I0605 17:31:29.316354  408313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-735995
	I0605 17:31:29.337814  408313 ssh_runner.go:195] Run: cat /version.json
	I0605 17:31:29.337913  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:29.338180  408313 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0605 17:31:29.338241  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:31:29.358043  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:31:29.364023  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:31:29.452740  408313 ssh_runner.go:195] Run: systemctl --version
	I0605 17:31:29.598017  408313 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0605 17:31:29.747556  408313 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0605 17:31:29.753567  408313 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 17:31:29.777352  408313 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0605 17:31:29.777431  408313 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 17:31:29.817079  408313 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
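The disable mechanism here is a plain rename. A shell-correct sketch of the same find invocation (the logged form has its parentheses already stripped by the shell), plus an illustrative way to undo it; the restore loop is an assumption, not a minikube command:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    # to re-enable later (illustrative):
    for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done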
	I0605 17:31:29.817104  408313 start.go:481] detecting cgroup driver to use...
	I0605 17:31:29.817158  408313 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0605 17:31:29.817223  408313 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0605 17:31:29.836008  408313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0605 17:31:29.850278  408313 docker.go:193] disabling cri-docker service (if available) ...
	I0605 17:31:29.850348  408313 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0605 17:31:29.867195  408313 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0605 17:31:29.885141  408313 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0605 17:31:29.977057  408313 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0605 17:31:30.113485  408313 docker.go:209] disabling docker service ...
	I0605 17:31:30.113636  408313 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0605 17:31:30.140950  408313 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0605 17:31:30.156959  408313 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0605 17:31:30.259855  408313 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0605 17:31:30.366361  408313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0605 17:31:30.380895  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 17:31:30.401514  408313 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0605 17:31:30.401579  408313 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:31:30.413859  408313 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0605 17:31:30.413926  408313 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:31:30.426202  408313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:31:30.438818  408313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
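Collected in one place, the CRI-O drop-in edits from the four sed runs above (same file, keys, and values as the log; the delete-then-append pair keeps exactly one conmon_cgroup line, and the restart happens further down after daemon-reload):

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf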
	I0605 17:31:30.451649  408313 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0605 17:31:30.463281  408313 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0605 17:31:30.475020  408313 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0605 17:31:30.485702  408313 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 17:31:30.577741  408313 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0605 17:31:30.707083  408313 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0605 17:31:30.707227  408313 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0605 17:31:30.712163  408313 start.go:549] Will wait 60s for crictl version
	I0605 17:31:30.712270  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:31:30.716864  408313 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0605 17:31:30.764695  408313 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0605 17:31:30.764848  408313 ssh_runner.go:195] Run: crio --version
	I0605 17:31:30.808420  408313 ssh_runner.go:195] Run: crio --version
	I0605 17:31:30.859133  408313 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0605 17:31:30.861500  408313 cli_runner.go:164] Run: docker network inspect addons-735995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 17:31:30.880649  408313 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0605 17:31:30.885643  408313 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
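The one-liner above is an idempotent pin: strip any stale host.minikube.internal entry, re-append the current one, and copy back through a temp file so /etc/hosts is never truncated while being read. A readable bash sketch of the same idea:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts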
	I0605 17:31:30.900519  408313 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:31:30.900595  408313 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 17:31:30.968016  408313 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 17:31:30.968041  408313 crio.go:415] Images already preloaded, skipping extraction
	I0605 17:31:30.968097  408313 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 17:31:31.011955  408313 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 17:31:31.011975  408313 cache_images.go:84] Images are preloaded, skipping loading
	I0605 17:31:31.012050  408313 ssh_runner.go:195] Run: crio config
	I0605 17:31:31.070997  408313 cni.go:84] Creating CNI manager for ""
	I0605 17:31:31.071019  408313 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 17:31:31.071029  408313 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0605 17:31:31.071078  408313 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-735995 NodeName:addons-735995 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0605 17:31:31.071265  408313 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-735995"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
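One way to sanity-check a generated multi-document config like the one above before running init is kubeadm's dry-run mode (illustrative; the log itself runs a plain kubeadm init further below, at the path where this file is staged):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run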
	
	I0605 17:31:31.071364  408313 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-735995 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-735995 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
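A note on the drop-in above: the empty ExecStart= is deliberate. In a systemd drop-in, an empty assignment clears the command list inherited from the base kubelet.service, so the second ExecStart= fully replaces it rather than appending. To inspect the merged result on the node:

    sudo systemctl cat kubelet      # base unit plus the 10-kubeadm.conf drop-in above
    sudo systemctl daemon-reload    # required after editing drop-ins; the log reloads before restarting services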
	I0605 17:31:31.071490  408313 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0605 17:31:31.084446  408313 binaries.go:44] Found k8s binaries, skipping transfer
	I0605 17:31:31.084548  408313 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0605 17:31:31.096019  408313 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0605 17:31:31.118976  408313 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0605 17:31:31.142228  408313 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0605 17:31:31.164643  408313 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0605 17:31:31.169694  408313 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0605 17:31:31.183684  408313 certs.go:56] Setting up /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995 for IP: 192.168.49.2
	I0605 17:31:31.183715  408313 certs.go:190] acquiring lock for shared ca certs: {Name:mkcde6289d01a116d789395fcd8dd485889e790f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:31.184373  408313 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key
	I0605 17:31:31.530650  408313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt ...
	I0605 17:31:31.530681  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt: {Name:mkf49f4d39ebeac83c30991cc1274d93bb2ecfd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:31.530877  408313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key ...
	I0605 17:31:31.530890  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key: {Name:mk1b94a487155252cc57cad80ff80c092402ff2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:31.531572  408313 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key
	I0605 17:31:31.836626  408313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt ...
	I0605 17:31:31.836656  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt: {Name:mk4460f0a8ac3fe54bd8e18f0dd4ba041104b31f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:31.836859  408313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key ...
	I0605 17:31:31.836873  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key: {Name:mkdd992c9bdc4ae6fcee640dafcd67541c1b69de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:31.837001  408313 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.key
	I0605 17:31:31.837019  408313 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt with IP's: []
	I0605 17:31:32.526900  408313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt ...
	I0605 17:31:32.526929  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: {Name:mk7ddd7bc5b092db3126d3aab300b4f0c0cef595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:32.527121  408313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.key ...
	I0605 17:31:32.527133  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.key: {Name:mk566ca60473fb7fbdadb54c09d85de4da3cf711 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:32.527213  408313 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key.dd3b5fb2
	I0605 17:31:32.527233  408313 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0605 17:31:32.760633  408313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt.dd3b5fb2 ...
	I0605 17:31:32.760666  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt.dd3b5fb2: {Name:mkf1dea16d5fc1ea558696eaeb602a863a0d36b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:32.760847  408313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key.dd3b5fb2 ...
	I0605 17:31:32.760860  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key.dd3b5fb2: {Name:mk0fc779103cc1c6963f333ce8367339ae39a20b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:32.760940  408313 certs.go:337] copying /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt
	I0605 17:31:32.761009  408313 certs.go:341] copying /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key
	I0605 17:31:32.761059  408313 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.key
	I0605 17:31:32.761072  408313 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.crt with IP's: []
	I0605 17:31:33.761138  408313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.crt ...
	I0605 17:31:33.761172  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.crt: {Name:mkdeaad2a3e905d8816cd9150953f41baa4017a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:33.761451  408313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.key ...
	I0605 17:31:33.761466  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.key: {Name:mk1d9aba57a07148941aee25cfc5e392e01e2538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:31:33.761688  408313 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem (1679 bytes)
	I0605 17:31:33.761737  408313 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem (1082 bytes)
	I0605 17:31:33.761764  408313 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem (1123 bytes)
	I0605 17:31:33.761795  408313 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem (1675 bytes)
	I0605 17:31:33.762532  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0605 17:31:33.792020  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0605 17:31:33.821464  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0605 17:31:33.851748  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0605 17:31:33.881355  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0605 17:31:33.910906  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0605 17:31:33.940541  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0605 17:31:33.970119  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0605 17:31:34.000785  408313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0605 17:31:34.030853  408313 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0605 17:31:34.052972  408313 ssh_runner.go:195] Run: openssl version
	I0605 17:31:34.060350  408313 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0605 17:31:34.072424  408313 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:31:34.078349  408313 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun  5 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:31:34.078427  408313 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:31:34.087545  408313 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
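The b5213941.0 name used above is the certificate's OpenSSL subject hash, which is how the link can be rebuilt from the PEM alone. A sketch ($h evaluates to b5213941 for this CA, per the openssl run above):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"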
	I0605 17:31:34.099851  408313 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0605 17:31:34.104596  408313 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0605 17:31:34.104645  408313 kubeadm.go:404] StartCluster: {Name:addons-735995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-735995 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:31:34.104739  408313 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0605 17:31:34.104808  408313 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0605 17:31:34.148028  408313 cri.go:88] found id: ""
	I0605 17:31:34.148101  408313 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0605 17:31:34.159154  408313 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0605 17:31:34.170184  408313 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0605 17:31:34.170248  408313 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0605 17:31:34.181346  408313 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0605 17:31:34.181415  408313 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0605 17:31:34.237284  408313 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0605 17:31:34.237542  408313 kubeadm.go:322] [preflight] Running pre-flight checks
	I0605 17:31:34.285270  408313 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0605 17:31:34.285392  408313 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-aws
	I0605 17:31:34.285452  408313 kubeadm.go:322] OS: Linux
	I0605 17:31:34.285534  408313 kubeadm.go:322] CGROUPS_CPU: enabled
	I0605 17:31:34.285619  408313 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0605 17:31:34.285701  408313 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0605 17:31:34.285781  408313 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0605 17:31:34.285848  408313 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0605 17:31:34.285938  408313 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0605 17:31:34.286012  408313 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0605 17:31:34.286084  408313 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0605 17:31:34.286160  408313 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0605 17:31:34.365365  408313 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0605 17:31:34.365542  408313 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0605 17:31:34.365670  408313 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0605 17:31:34.618552  408313 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0605 17:31:34.620879  408313 out.go:204]   - Generating certificates and keys ...
	I0605 17:31:34.621110  408313 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0605 17:31:34.621224  408313 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0605 17:31:34.892810  408313 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0605 17:31:35.470339  408313 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0605 17:31:35.757889  408313 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0605 17:31:36.284435  408313 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0605 17:31:37.117622  408313 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0605 17:31:37.118020  408313 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-735995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0605 17:31:37.660850  408313 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0605 17:31:37.661149  408313 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-735995 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0605 17:31:38.104541  408313 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0605 17:31:38.369416  408313 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0605 17:31:39.005320  408313 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0605 17:31:39.005388  408313 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0605 17:31:39.364374  408313 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0605 17:31:39.625107  408313 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0605 17:31:40.003056  408313 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0605 17:31:40.278570  408313 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0605 17:31:40.289884  408313 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0605 17:31:40.291596  408313 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0605 17:31:40.291651  408313 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0605 17:31:40.406773  408313 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0605 17:31:40.409129  408313 out.go:204]   - Booting up control plane ...
	I0605 17:31:40.409251  408313 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0605 17:31:40.410794  408313 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0605 17:31:40.411864  408313 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0605 17:31:40.413012  408313 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0605 17:31:40.416391  408313 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0605 17:31:49.420046  408313 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002687 seconds
	I0605 17:31:49.420161  408313 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0605 17:31:49.438114  408313 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0605 17:31:49.964100  408313 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0605 17:31:49.964320  408313 kubeadm.go:322] [mark-control-plane] Marking the node addons-735995 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0605 17:31:50.476969  408313 kubeadm.go:322] [bootstrap-token] Using token: xd6dl9.7f38bvf10mlyqyhb
	I0605 17:31:50.478682  408313 out.go:204]   - Configuring RBAC rules ...
	I0605 17:31:50.478800  408313 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0605 17:31:50.485673  408313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0605 17:31:50.495111  408313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0605 17:31:50.498302  408313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0605 17:31:50.501809  408313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0605 17:31:50.505752  408313 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0605 17:31:50.519344  408313 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0605 17:31:50.756554  408313 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0605 17:31:50.905287  408313 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0605 17:31:50.906279  408313 kubeadm.go:322] 
	I0605 17:31:50.906351  408313 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0605 17:31:50.906358  408313 kubeadm.go:322] 
	I0605 17:31:50.906430  408313 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0605 17:31:50.906434  408313 kubeadm.go:322] 
	I0605 17:31:50.906458  408313 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0605 17:31:50.906520  408313 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0605 17:31:50.906568  408313 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0605 17:31:50.906573  408313 kubeadm.go:322] 
	I0605 17:31:50.906623  408313 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0605 17:31:50.906627  408313 kubeadm.go:322] 
	I0605 17:31:50.906672  408313 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0605 17:31:50.906678  408313 kubeadm.go:322] 
	I0605 17:31:50.906727  408313 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0605 17:31:50.906797  408313 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0605 17:31:50.906861  408313 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0605 17:31:50.906866  408313 kubeadm.go:322] 
	I0605 17:31:50.906944  408313 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0605 17:31:50.907017  408313 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0605 17:31:50.907021  408313 kubeadm.go:322] 
	I0605 17:31:50.907100  408313 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xd6dl9.7f38bvf10mlyqyhb \
	I0605 17:31:50.907197  408313 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 \
	I0605 17:31:50.907217  408313 kubeadm.go:322] 	--control-plane 
	I0605 17:31:50.907221  408313 kubeadm.go:322] 
	I0605 17:31:50.907301  408313 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0605 17:31:50.907306  408313 kubeadm.go:322] 
	I0605 17:31:50.907382  408313 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xd6dl9.7f38bvf10mlyqyhb \
	I0605 17:31:50.907478  408313 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 
	I0605 17:31:50.909496  408313 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0605 17:31:50.909698  408313 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0605 17:31:50.909927  408313 kubeadm.go:322] W0605 17:31:34.365252    1052 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0605 17:31:50.910153  408313 kubeadm.go:322] W0605 17:31:40.413151    1052 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0605 17:31:50.910163  408313 cni.go:84] Creating CNI manager for ""
	I0605 17:31:50.910171  408313 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 17:31:50.914313  408313 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0605 17:31:50.916493  408313 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0605 17:31:50.943229  408313 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0605 17:31:50.943254  408313 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0605 17:31:50.998449  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0605 17:31:51.929427  408313 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0605 17:31:51.929580  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:51.929667  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=b059332e570e1d712234ec4f823aa77854e7956d minikube.k8s.io/name=addons-735995 minikube.k8s.io/updated_at=2023_06_05T17_31_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:52.137495  408313 ops.go:34] apiserver oom_adj: -16
	I0605 17:31:52.137603  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:52.741871  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:53.242210  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:53.741263  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:54.241925  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:54.741621  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:55.242212  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:55.741393  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:56.242105  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:56.742232  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:57.241428  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:57.741297  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:58.242006  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:58.742239  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:59.241716  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:31:59.741935  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:00.242034  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:00.741215  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:01.242198  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:01.742144  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:02.241299  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:02.741856  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:03.241282  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:03.741286  408313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:32:03.906705  408313 kubeadm.go:1076] duration metric: took 11.977184369s to wait for elevateKubeSystemPrivileges.
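The half-second cadence visible in the get-sa runs above is a readiness poll on the default service account, which only appears once the service-account controller has run. As a shell sketch (minikube does this in Go; the loop below is illustrative):

    until sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done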
	I0605 17:32:03.906735  408313 kubeadm.go:406] StartCluster complete in 29.802094659s
	I0605 17:32:03.906750  408313 settings.go:142] acquiring lock: {Name:mk7ddedb44759cc39266e9c612309013659bd7a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:32:03.908158  408313 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:32:03.908575  408313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/kubeconfig: {Name:mkb77de9bf1ac5a664886fbfefd28a762472c016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:32:03.908811  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0605 17:32:03.909133  408313 config.go:182] Loaded profile config "addons-735995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:32:03.909233  408313 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0605 17:32:03.909309  408313 addons.go:66] Setting volumesnapshots=true in profile "addons-735995"
	I0605 17:32:03.909325  408313 addons.go:228] Setting addon volumesnapshots=true in "addons-735995"
	I0605 17:32:03.909380  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.909817  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.911519  408313 addons.go:66] Setting ingress=true in profile "addons-735995"
	I0605 17:32:03.911548  408313 addons.go:228] Setting addon ingress=true in "addons-735995"
	I0605 17:32:03.911604  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.912107  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.912196  408313 addons.go:66] Setting cloud-spanner=true in profile "addons-735995"
	I0605 17:32:03.912212  408313 addons.go:228] Setting addon cloud-spanner=true in "addons-735995"
	I0605 17:32:03.912246  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.912640  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.912724  408313 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-735995"
	I0605 17:32:03.912755  408313 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-735995"
	I0605 17:32:03.912794  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.913171  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.913249  408313 addons.go:66] Setting default-storageclass=true in profile "addons-735995"
	I0605 17:32:03.913267  408313 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-735995"
	I0605 17:32:03.913485  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.913547  408313 addons.go:66] Setting gcp-auth=true in profile "addons-735995"
	I0605 17:32:03.913564  408313 mustload.go:65] Loading cluster: addons-735995
	I0605 17:32:03.913717  408313 config.go:182] Loaded profile config "addons-735995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:32:03.913926  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.913996  408313 addons.go:66] Setting metrics-server=true in profile "addons-735995"
	I0605 17:32:03.914011  408313 addons.go:228] Setting addon metrics-server=true in "addons-735995"
	I0605 17:32:03.914041  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.914453  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.914528  408313 addons.go:66] Setting ingress-dns=true in profile "addons-735995"
	I0605 17:32:03.914544  408313 addons.go:228] Setting addon ingress-dns=true in "addons-735995"
	I0605 17:32:03.914578  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.914941  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.915006  408313 addons.go:66] Setting inspektor-gadget=true in profile "addons-735995"
	I0605 17:32:03.915021  408313 addons.go:228] Setting addon inspektor-gadget=true in "addons-735995"
	I0605 17:32:03.915045  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.915390  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.915456  408313 addons.go:66] Setting registry=true in profile "addons-735995"
	I0605 17:32:03.915471  408313 addons.go:228] Setting addon registry=true in "addons-735995"
	I0605 17:32:03.915495  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.919375  408313 addons.go:66] Setting storage-provisioner=true in profile "addons-735995"
	I0605 17:32:03.919404  408313 addons.go:228] Setting addon storage-provisioner=true in "addons-735995"
	I0605 17:32:03.919445  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:03.919872  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:03.936059  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:04.027856  408313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0605 17:32:04.049181  408313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0605 17:32:04.055790  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0605 17:32:04.063265  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0605 17:32:04.079911  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0605 17:32:04.079864  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0605 17:32:04.093984  408313 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.5
	I0605 17:32:04.096091  408313 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0605 17:32:04.096112  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0605 17:32:04.096177  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.107494  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0605 17:32:04.092808  408313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.0
	I0605 17:32:04.114998  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0605 17:32:04.115034  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0605 17:32:04.115105  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.133519  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0605 17:32:04.138972  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0605 17:32:04.143511  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0605 17:32:04.147436  408313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0605 17:32:04.162428  408313 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0605 17:32:04.162459  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0605 17:32:04.162519  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.168225  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:04.180057  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0605 17:32:04.180082  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0605 17:32:04.180153  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.180712  408313 addons.go:228] Setting addon default-storageclass=true in "addons-735995"
	I0605 17:32:04.180745  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:04.181156  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:04.211167  408313 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0605 17:32:04.215202  408313 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0605 17:32:04.215228  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0605 17:32:04.215297  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.233593  408313 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0605 17:32:04.238297  408313 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0605 17:32:04.238329  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0605 17:32:04.238398  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.249438  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
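The pipeline above injects a hosts block into the CoreDNS Corefile so in-cluster lookups of host.minikube.internal resolve to the gateway. A hedged way to verify the result (context name and IP are the ones from this run):

    kubectl --context addons-735995 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expected to now contain, ahead of the forward block:
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }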
	I0605 17:32:04.267270  408313 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.16.1
	I0605 17:32:04.270076  408313 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0605 17:32:04.270131  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0605 17:32:04.270213  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.274550  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.334813  408313 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0605 17:32:04.338474  408313 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 17:32:04.338505  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0605 17:32:04.338598  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.377726  408313 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0605 17:32:04.377753  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0605 17:32:04.378075  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.428844  408313 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0605 17:32:04.437420  408313 out.go:177]   - Using image docker.io/registry:2.8.1
	I0605 17:32:04.436055  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.440727  408313 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0605 17:32:04.440750  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0605 17:32:04.440812  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:04.457001  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.460715  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.462319  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.496024  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.516335  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.533667  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.543458  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:04.559669  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
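For readers tracing the repeated cli_runner calls above: each one runs the same Docker inspection, and the Go template digs out the host port that Docker mapped to the container's 22/tcp, which is the port every following "new ssh client" entry dials. A standalone equivalent (an illustrative invocation, not taken from this log) looks like:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-735995
    # prints 33113 here, matching Port:33113 in the sshutil.go:53 lines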
	I0605 17:32:04.735254  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0605 17:32:04.744278  408313 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-735995" context rescaled to 1 replicas
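The rescale at kapi.go:248 pins coredns to a single replica for this one-node cluster; an imperative equivalent (a sketch, assuming the same context and namespace) would be:

    kubectl --context addons-735995 -n kube-system \
      scale deployment coredns --replicas=1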
	I0605 17:32:04.744364  408313 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0605 17:32:04.748002  408313 out.go:177] * Verifying Kubernetes components...
	I0605 17:32:04.750502  408313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:32:04.831308  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0605 17:32:04.859820  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0605 17:32:04.880565  408313 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0605 17:32:04.880591  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0605 17:32:04.933172  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0605 17:32:04.948889  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0605 17:32:04.948957  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0605 17:32:04.971801  408313 addons.go:420] installing /etc/kubernetes/addons/registry-svc.yaml
	I0605 17:32:04.971826  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0605 17:32:04.978838  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 17:32:04.994189  408313 addons.go:420] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0605 17:32:04.994213  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0605 17:32:05.016343  408313 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0605 17:32:05.016372  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0605 17:32:05.066341  408313 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0605 17:32:05.066368  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0605 17:32:05.117912  408313 addons.go:420] installing /etc/kubernetes/addons/ig-role.yaml
	I0605 17:32:05.117939  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0605 17:32:05.135792  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0605 17:32:05.135819  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0605 17:32:05.218295  408313 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0605 17:32:05.218320  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0605 17:32:05.218552  408313 addons.go:420] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0605 17:32:05.218568  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0605 17:32:05.243902  408313 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0605 17:32:05.243940  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0605 17:32:05.329533  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0605 17:32:05.329560  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0605 17:32:05.354598  408313 addons.go:420] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0605 17:32:05.354623  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0605 17:32:05.402934  408313 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0605 17:32:05.402956  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0605 17:32:05.412311  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0605 17:32:05.439117  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0605 17:32:05.439182  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0605 17:32:05.466586  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0605 17:32:05.466647  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0605 17:32:05.563834  408313 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0605 17:32:05.563899  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0605 17:32:05.567079  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0605 17:32:05.572356  408313 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0605 17:32:05.572426  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0605 17:32:05.648593  408313 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0605 17:32:05.648667  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0605 17:32:05.732460  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0605 17:32:05.750637  408313 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0605 17:32:05.750717  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0605 17:32:05.813022  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0605 17:32:05.813101  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0605 17:32:05.966917  408313 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.717436186s)
	I0605 17:32:05.966946  408313 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
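The pipeline that just completed (1.717s) rewrites the coredns ConfigMap in place: one sed expression inserts a log directive ahead of errors, and the other inserts, before the "forward . /etc/resolv.conf" line, the hosts stanza below, so pods can resolve the Docker gateway by name:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }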
	I0605 17:32:05.973448  408313 addons.go:420] installing /etc/kubernetes/addons/ig-crd.yaml
	I0605 17:32:05.973472  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0605 17:32:06.034893  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0605 17:32:06.034917  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0605 17:32:06.143154  408313 addons.go:420] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0605 17:32:06.143183  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0605 17:32:06.206081  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0605 17:32:06.206107  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0605 17:32:06.303708  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0605 17:32:06.363513  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0605 17:32:06.363540  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0605 17:32:06.571218  408313 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0605 17:32:06.571243  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0605 17:32:06.710020  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0605 17:32:07.670579  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.935288991s)
	I0605 17:32:07.670629  408313 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.92005997s)
	I0605 17:32:07.671460  408313 node_ready.go:35] waiting up to 6m0s for node "addons-735995" to be "Ready" ...
	I0605 17:32:08.260141  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.428796516s)
	I0605 17:32:09.573115  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.639863164s)
	I0605 17:32:09.573206  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.5943099s)
	I0605 17:32:09.573246  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.160911039s)
	I0605 17:32:09.573260  408313 addons.go:464] Verifying addon registry=true in "addons-735995"
	I0605 17:32:09.575282  408313 out.go:177] * Verifying registry addon...
	I0605 17:32:09.573388  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.713543443s)
	I0605 17:32:09.573522  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.00638233s)
	I0605 17:32:09.573612  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.841069715s)
	I0605 17:32:09.573685  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.269927502s)
	I0605 17:32:09.577390  408313 addons.go:464] Verifying addon metrics-server=true in "addons-735995"
	I0605 17:32:09.577410  408313 addons.go:464] Verifying addon ingress=true in "addons-735995"
	I0605 17:32:09.580183  408313 out.go:177] * Verifying ingress addon...
	I0605 17:32:09.578392  408313 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0605 17:32:09.578425  408313 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0605 17:32:09.583011  408313 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0605 17:32:09.580241  408313 retry.go:31] will retry after 285.689393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
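Both failures above are the same CRD-establishment race: csi-hostpath-snapshotclass.yaml is applied in the same batch that creates the VolumeSnapshotClass CRD, and the API server has not registered the new kind yet, hence "no matches for kind". The retry below switches to kubectl apply --force at 17:32:09.869 and completes cleanly at 17:32:11.725; a sketch of how the race could be avoided outright (same files as logged, with an explicit ordering step added):

    # Apply the CRD first, wait until the API server has established it,
    # then apply the custom resource that depends on it.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml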
	I0605 17:32:09.590259  408313 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0605 17:32:09.590287  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:09.594777  408313 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0605 17:32:09.594800  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:09.778540  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:09.869132  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0605 17:32:09.884212  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.174140985s)
	I0605 17:32:09.884246  408313 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-735995"
	I0605 17:32:09.888051  408313 out.go:177] * Verifying csi-hostpath-driver addon...
	I0605 17:32:09.890952  408313 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0605 17:32:09.915654  408313 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0605 17:32:09.915675  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:10.100007  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:10.108324  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:10.422568  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:10.594648  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:10.603624  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:10.929057  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:11.059311  408313 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0605 17:32:11.059402  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:11.094563  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:11.117108  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:11.117370  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:11.304808  408313 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0605 17:32:11.367083  408313 addons.go:228] Setting addon gcp-auth=true in "addons-735995"
	I0605 17:32:11.367130  408313 host.go:66] Checking if "addons-735995" exists ...
	I0605 17:32:11.367575  408313 cli_runner.go:164] Run: docker container inspect addons-735995 --format={{.State.Status}}
	I0605 17:32:11.394752  408313 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0605 17:32:11.394810  408313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-735995
	I0605 17:32:11.422843  408313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/addons-735995/id_rsa Username:docker}
	I0605 17:32:11.457028  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:11.638009  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:11.644199  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:11.725380  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.856186879s)
	I0605 17:32:11.727997  408313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0605 17:32:11.729921  408313 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0605 17:32:11.732180  408313 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0605 17:32:11.732230  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0605 17:32:11.791008  408313 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0605 17:32:11.791075  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0605 17:32:11.859298  408313 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0605 17:32:11.859376  408313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0605 17:32:11.922672  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:11.926154  408313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0605 17:32:12.095389  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:12.100423  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:12.272919  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:12.421414  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:12.596891  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:12.600315  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:12.942020  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:13.108729  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:13.135275  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:13.375991  408313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.449750988s)
	I0605 17:32:13.378267  408313 addons.go:464] Verifying addon gcp-auth=true in "addons-735995"
	I0605 17:32:13.381429  408313 out.go:177] * Verifying gcp-auth addon...
	I0605 17:32:13.384159  408313 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0605 17:32:13.403172  408313 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0605 17:32:13.403245  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
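Every kapi.go:96 line from here on is one poll of a label selector, repeated until the matching pod reports Ready. The same gate expressed as a one-shot wait (illustrative, using the selector and namespace logged above):

    kubectl -n gcp-auth wait pod \
      -l kubernetes.io/minikube-addons=gcp-auth \
      --for=condition=Ready --timeout=6m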
	I0605 17:32:13.447281  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:13.595826  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:13.603990  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:13.907906  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:13.921391  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:14.095863  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:14.100834  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:14.412232  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:14.422319  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:14.596752  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:14.602015  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:14.773721  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:14.907911  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:14.921481  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:15.101374  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:15.105587  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:15.407217  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:15.424830  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:15.595515  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:15.599904  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:15.908424  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:15.931994  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:16.102682  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:16.103610  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:16.407762  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:16.421763  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:16.595667  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:16.600516  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:16.773901  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:16.908919  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:16.925694  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:17.095443  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:17.104332  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:17.408655  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:17.426973  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:17.599893  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:17.605408  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:17.908116  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:17.924415  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:18.095940  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:18.101563  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:18.407317  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:18.421796  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:18.595371  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:18.600905  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:18.908255  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:18.921539  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:19.095057  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:19.101530  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:19.273290  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:19.406959  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:19.421968  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:19.598218  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:19.601533  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:19.909392  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:19.923256  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:20.095066  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:20.099354  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:20.407770  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:20.424335  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:20.603184  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:20.615171  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:20.910864  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:20.922563  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:21.095497  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:21.102499  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:21.278213  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:21.408773  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:21.424425  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:21.600538  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:21.603808  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:21.907328  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:21.921252  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:22.096881  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:22.099886  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:22.408473  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:22.424966  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:22.595345  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:22.599893  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:22.907889  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:22.920338  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:23.095368  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:23.100216  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:23.407251  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:23.421464  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:23.595720  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:23.600436  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:23.772741  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:23.908858  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:23.928713  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:24.103214  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:24.106217  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:24.408451  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:24.425627  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:24.596696  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:24.601132  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:24.907462  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:24.921327  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:25.095841  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:25.099651  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:25.407420  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:25.420582  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:25.594479  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:25.598960  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:25.907692  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:25.920642  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:26.095317  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:26.099151  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:26.272929  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:26.407235  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:26.420902  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:26.597184  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:26.599663  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:26.907287  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:26.920151  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:27.094720  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:27.099530  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:27.406971  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:27.420730  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:27.594168  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:27.598674  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:27.906795  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:27.920194  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:28.094418  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:28.098904  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:28.407594  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:28.420448  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:28.595847  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:28.599784  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:28.772233  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:28.907271  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:28.920128  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:29.094812  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:29.098533  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:29.407218  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:29.420264  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:29.594976  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:29.599074  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:29.909507  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:29.922079  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:30.095585  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:30.100985  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:30.410817  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:30.420802  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:30.595451  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:30.598915  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:30.772764  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:30.907448  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:30.920720  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:31.095044  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:31.099752  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:31.406645  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:31.420627  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:31.595000  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:31.598865  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:31.907911  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:31.920582  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:32.094882  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:32.098504  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:32.406638  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:32.419720  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:32.594704  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:32.599451  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:32.773058  408313 node_ready.go:58] node "addons-735995" has status "Ready":"False"
	I0605 17:32:32.908471  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:32.920788  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:33.094570  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:33.099519  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:33.406964  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:33.420995  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:33.594649  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:33.599306  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:33.907745  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:33.920342  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:34.095066  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:34.099197  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:34.407312  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:34.420147  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:34.608632  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:34.612797  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:34.924142  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:34.954945  408313 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0605 17:32:34.955010  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:35.152004  408313 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0605 17:32:35.152120  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:35.153518  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:35.316700  408313 node_ready.go:49] node "addons-735995" has status "Ready":"True"
	I0605 17:32:35.316770  408313 node_ready.go:38] duration metric: took 27.645275528s waiting for node "addons-735995" to be "Ready" ...
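The 27.6s node wait that just resolved is the node's Ready condition flipping to True; a standalone equivalent check (illustrative):

    kubectl wait node addons-735995 --for=condition=Ready --timeout=6m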
	I0605 17:32:35.316794  408313 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0605 17:32:35.388074  408313 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-l5bkd" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:35.423231  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:35.433049  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:35.638224  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:35.638495  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:35.908367  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:35.923094  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:36.096145  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:36.100029  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:36.409771  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:36.422324  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:36.451045  408313 pod_ready.go:92] pod "coredns-5d78c9869d-l5bkd" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.451071  408313 pod_ready.go:81] duration metric: took 1.06293025s waiting for pod "coredns-5d78c9869d-l5bkd" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.451094  408313 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.457691  408313 pod_ready.go:92] pod "etcd-addons-735995" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.457737  408313 pod_ready.go:81] duration metric: took 6.633631ms waiting for pod "etcd-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.457753  408313 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.464459  408313 pod_ready.go:92] pod "kube-apiserver-addons-735995" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.464495  408313 pod_ready.go:81] duration metric: took 6.728433ms waiting for pod "kube-apiserver-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.464510  408313 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.471169  408313 pod_ready.go:92] pod "kube-controller-manager-addons-735995" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.471195  408313 pod_ready.go:81] duration metric: took 6.668561ms waiting for pod "kube-controller-manager-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.471210  408313 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cvrjb" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.479648  408313 pod_ready.go:92] pod "kube-proxy-cvrjb" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.479678  408313 pod_ready.go:81] duration metric: took 8.459096ms waiting for pod "kube-proxy-cvrjb" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.479690  408313 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.595635  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:36.601138  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:36.876196  408313 pod_ready.go:92] pod "kube-scheduler-addons-735995" in "kube-system" namespace has status "Ready":"True"
	I0605 17:32:36.876222  408313 pod_ready.go:81] duration metric: took 396.523416ms waiting for pod "kube-scheduler-addons-735995" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.876234  408313 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace to be "Ready" ...
	I0605 17:32:36.907288  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:36.922809  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:37.095720  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:37.099903  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:37.407523  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:37.425372  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:37.599428  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:37.609182  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:37.908212  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:37.923165  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:38.094942  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:38.099224  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:38.407704  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:38.421619  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:38.596223  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:38.599413  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:38.907190  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:38.921925  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:39.095387  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:39.101035  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:39.288853  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:39.409467  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:39.427430  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:39.598540  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:39.600903  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:39.908502  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:39.926854  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:40.106845  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:40.111152  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:40.408316  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:40.423344  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:40.599261  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:40.606480  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:40.908616  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:40.936753  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:41.099495  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:41.103387  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:41.289763  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:41.407978  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:41.424349  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:41.595910  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:41.601500  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:41.907758  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:41.950984  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:42.107486  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:42.108165  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:42.407353  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:42.427903  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:42.605224  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:42.617242  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:42.908709  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:42.922913  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:43.106256  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:43.107762  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:43.408476  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:43.426102  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:43.595363  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:43.602746  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:43.781648  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:43.907757  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:43.921900  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:44.096362  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:44.101970  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:44.408523  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:44.422468  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:44.607553  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:44.608949  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:44.907432  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:44.922316  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:45.100016  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:45.102245  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:45.409035  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:45.422191  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:45.596302  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:45.599725  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:45.782197  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:45.907199  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:45.922403  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:46.097423  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:46.102621  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:46.408150  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:46.422399  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:46.595955  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:46.600668  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:46.907148  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:46.921632  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:47.095833  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:47.099385  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:47.407320  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:47.421675  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:47.595866  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:47.600054  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:47.907704  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:47.940955  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:48.095368  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:48.099744  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:48.283880  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:48.408737  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:48.423501  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:48.595106  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:48.599026  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:48.907910  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:48.922257  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:49.096232  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:49.100386  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:49.407452  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:49.425749  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:49.599162  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:49.605140  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:49.919827  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:49.927839  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:50.096296  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:50.102646  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:50.408891  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:50.421875  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:50.595717  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:50.598968  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:50.780736  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:50.907224  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:50.922179  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:51.097213  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:51.102504  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:51.407167  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:51.423339  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:51.598986  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:51.602205  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:51.912814  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:51.925975  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:52.096320  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:52.101803  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:52.435689  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:52.439340  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:52.597868  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:52.606277  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:52.789157  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:52.906924  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:52.921738  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:53.096025  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:53.100717  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:53.406967  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:53.422385  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:53.595572  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:53.600540  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:53.907190  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:53.922062  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:54.116952  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:54.117863  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:54.408256  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:54.430969  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:54.598198  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:54.607487  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:54.910192  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:54.922306  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:55.097039  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:55.101912  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:55.281009  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:55.408266  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:55.423306  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:55.597112  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:55.602571  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:55.907695  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:55.921909  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:56.096533  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:56.103639  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:56.415548  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:56.424490  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:56.608574  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:56.612889  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:56.908097  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:56.924258  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:57.096770  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:57.102078  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:57.283650  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:57.407574  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:57.422879  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:57.597895  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:57.602032  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:57.931613  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:57.954925  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:58.097718  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:58.105745  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:58.410496  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:58.430333  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:58.599381  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:58.619806  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:58.907224  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:58.927558  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:59.103084  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:59.113194  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:59.408138  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:59.423867  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:32:59.616486  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:32:59.616952  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:32:59.784359  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:32:59.910185  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:32:59.923690  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:00.133311  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:00.158375  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:00.413822  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:00.423220  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:00.595309  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:00.599633  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:00.907040  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:00.921862  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:01.095728  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:01.100562  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:01.409214  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:01.422392  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:01.596026  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:01.599654  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:01.907555  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:01.928422  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:02.100801  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:02.101692  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:02.291392  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:33:02.407374  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:02.421373  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:02.596015  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:02.599084  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:02.911740  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:02.923961  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:03.096184  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:03.101025  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:03.407982  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:03.424471  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:03.608441  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:03.608988  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:03.908070  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:03.921213  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:04.114021  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:04.114947  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:04.407296  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:04.422128  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:04.597386  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:04.600602  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:04.781420  408313 pod_ready.go:102] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"False"
	I0605 17:33:04.909372  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:04.924286  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:05.095576  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:05.099825  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:05.407233  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:05.422900  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:05.696713  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:05.703963  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:05.782951  408313 pod_ready.go:92] pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace has status "Ready":"True"
	I0605 17:33:05.783040  408313 pod_ready.go:81] duration metric: took 28.906795415s waiting for pod "metrics-server-844d8db974-66p4n" in "kube-system" namespace to be "Ready" ...
	I0605 17:33:05.783110  408313 pod_ready.go:38] duration metric: took 30.466261695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0605 17:33:05.783182  408313 api_server.go:52] waiting for apiserver process to appear ...
	I0605 17:33:05.783252  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 17:33:05.783388  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 17:33:05.857947  408313 cri.go:88] found id: "fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:05.857971  408313 cri.go:88] found id: ""
	I0605 17:33:05.857979  408313 logs.go:284] 1 containers: [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4]
	I0605 17:33:05.858034  408313 ssh_runner.go:195] Run: which crictl
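(The cri.go/ssh_runner.go lines above discover container IDs one component at a time: `sudo crictl ps -a --quiet --name=<component>` prints bare container IDs, one per line, and an empty result is recorded as found id: "". A minimal sketch of that step — listContainerIDs is a hypothetical helper; only the crictl invocation is taken from the log:

    package crisketch

    import (
        "os/exec"
        "strings"
    )

    // listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and
    // returns the container IDs it prints, one per output line.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    })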
	I0605 17:33:05.864216  408313 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 17:33:05.864289  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 17:33:05.911877  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:05.924110  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:05.959001  408313 cri.go:88] found id: "6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:05.959023  408313 cri.go:88] found id: ""
	I0605 17:33:05.959030  408313 logs.go:284] 1 containers: [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576]
	I0605 17:33:05.959089  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:05.967513  408313 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 17:33:05.967581  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 17:33:06.064292  408313 cri.go:88] found id: "508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:06.064313  408313 cri.go:88] found id: ""
	I0605 17:33:06.064321  408313 logs.go:284] 1 containers: [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e]
	I0605 17:33:06.064384  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:06.070139  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 17:33:06.070223  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 17:33:06.096580  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:06.101763  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:06.151182  408313 cri.go:88] found id: "afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:06.151208  408313 cri.go:88] found id: ""
	I0605 17:33:06.151216  408313 logs.go:284] 1 containers: [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719]
	I0605 17:33:06.151274  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:06.164593  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 17:33:06.164667  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 17:33:06.255481  408313 cri.go:88] found id: "5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:06.255506  408313 cri.go:88] found id: ""
	I0605 17:33:06.255515  408313 logs.go:284] 1 containers: [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098]
	I0605 17:33:06.255573  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:06.261760  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 17:33:06.261842  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 17:33:06.325892  408313 cri.go:88] found id: "07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:06.325916  408313 cri.go:88] found id: ""
	I0605 17:33:06.325924  408313 logs.go:284] 1 containers: [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a]
	I0605 17:33:06.325986  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:06.331847  408313 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 17:33:06.331932  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 17:33:06.391508  408313 cri.go:88] found id: "9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:06.391532  408313 cri.go:88] found id: ""
	I0605 17:33:06.391540  408313 logs.go:284] 1 containers: [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c]
	I0605 17:33:06.391608  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:06.397823  408313 logs.go:123] Gathering logs for dmesg ...
	I0605 17:33:06.397850  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 17:33:06.408217  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:06.422742  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:06.434206  408313 logs.go:123] Gathering logs for kube-scheduler [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719] ...
	I0605 17:33:06.434235  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:06.504496  408313 logs.go:123] Gathering logs for kube-proxy [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098] ...
	I0605 17:33:06.504535  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:06.611938  408313 logs.go:123] Gathering logs for kube-controller-manager [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a] ...
	I0605 17:33:06.611964  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:06.622262  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:06.623613  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:06.726420  408313 logs.go:123] Gathering logs for kubelet ...
	I0605 17:33:06.726498  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 17:33:06.839638  408313 logs.go:123] Gathering logs for describe nodes ...
	I0605 17:33:06.839716  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0605 17:33:06.921432  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:06.925458  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:07.118944  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:07.120401  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:07.131091  408313 logs.go:123] Gathering logs for kube-apiserver [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4] ...
	I0605 17:33:07.131165  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:07.193090  408313 logs.go:123] Gathering logs for etcd [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576] ...
	I0605 17:33:07.193124  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:07.253207  408313 logs.go:123] Gathering logs for coredns [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e] ...
	I0605 17:33:07.253242  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:07.301768  408313 logs.go:123] Gathering logs for kindnet [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c] ...
	I0605 17:33:07.301799  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:07.349948  408313 logs.go:123] Gathering logs for CRI-O ...
	I0605 17:33:07.349975  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0605 17:33:07.407453  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:07.437783  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:07.442783  408313 logs.go:123] Gathering logs for container status ...
	I0605 17:33:07.442812  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
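(The logs.go:123 "Gathering logs for ..." pass above runs one collection command per source; every command is quoted verbatim in the Run: lines. A sketch of driving that pass — the slice-and-loop structure is a hypothetical reconstruction, but each command string is copied from this run:

    package logsketch

    import (
        "fmt"
        "os/exec"
    )

    // logSources pairs each log section with the collection command used above.
    var logSources = []struct{ name, cmd string }{
        {"kubelet", `sudo journalctl -u kubelet -n 400`},
        {"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
        {"CRI-O", `sudo journalctl -u crio -n 400`},
        {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    }

    // gather runs each command through bash, matching the /bin/bash -c
    // invocations in the log, and prints whatever comes back.
    func gather() {
        for _, s := range logSources {
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            fmt.Printf("==> %s (err=%v)\n%s\n", s.name, err, out)
        }
    }

The container-status command's `which crictl || echo crictl` / `|| sudo docker ps -a` chain is a portability fallback: prefer crictl if installed, otherwise fall back to docker.)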
	I0605 17:33:07.596200  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:07.599692  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:07.907616  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:07.921733  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:08.095161  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:08.099259  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:08.407427  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:08.424727  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:08.596190  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:08.600808  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:08.908874  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:08.926153  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:09.102385  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:09.108464  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:09.415140  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:09.424777  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:09.598054  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:09.608497  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:09.911617  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:09.923503  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:10.005907  408313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 17:33:10.030437  408313 api_server.go:72] duration metric: took 1m5.286025078s to wait for apiserver process to appear ...
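(The process wait that just completed polls `sudo pgrep -xnf kube-apiserver.*minikube.*` on the node until it exits zero, i.e. until a matching process exists. A minimal sketch under those assumptions — the run callback stands in for minikube's SSH runner and is hypothetical; the pgrep command is the one logged at 17:33:10.005:

    package apisketch

    import (
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForAPIServerProcess polls the pgrep check until it succeeds. run is
    // whatever executes a shell command on the node and returns its error.
    func waitForAPIServerProcess(run func(cmd string) error, timeout time.Duration) error {
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            // pgrep exits non-zero when nothing matches, so err != nil here
            // simply means "not up yet", not a fatal failure.
            if err := run(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err != nil {
                return false, nil
            }
            return true, nil
        })
    })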
	I0605 17:33:10.030513  408313 api_server.go:88] waiting for apiserver healthz status ...
	I0605 17:33:10.030561  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 17:33:10.030649  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 17:33:10.126626  408313 cri.go:88] found id: "fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:10.126732  408313 cri.go:88] found id: ""
	I0605 17:33:10.126754  408313 logs.go:284] 1 containers: [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4]
	I0605 17:33:10.126858  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.130113  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:10.133464  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:10.144318  408313 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 17:33:10.144441  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 17:33:10.240553  408313 cri.go:88] found id: "6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:10.240621  408313 cri.go:88] found id: ""
	I0605 17:33:10.240644  408313 logs.go:284] 1 containers: [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576]
	I0605 17:33:10.240741  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.248610  408313 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 17:33:10.248732  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 17:33:10.320727  408313 cri.go:88] found id: "508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:10.320795  408313 cri.go:88] found id: ""
	I0605 17:33:10.320818  408313 logs.go:284] 1 containers: [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e]
	I0605 17:33:10.320914  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.328965  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 17:33:10.329075  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 17:33:10.410567  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:10.425252  408313 cri.go:88] found id: "afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:10.425277  408313 cri.go:88] found id: ""
	I0605 17:33:10.425286  408313 logs.go:284] 1 containers: [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719]
	I0605 17:33:10.425340  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.431131  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:10.437784  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 17:33:10.437880  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 17:33:10.490921  408313 cri.go:88] found id: "5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:10.490947  408313 cri.go:88] found id: ""
	I0605 17:33:10.490956  408313 logs.go:284] 1 containers: [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098]
	I0605 17:33:10.491009  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.496345  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 17:33:10.496421  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 17:33:10.575552  408313 cri.go:88] found id: "07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:10.575624  408313 cri.go:88] found id: ""
	I0605 17:33:10.575647  408313 logs.go:284] 1 containers: [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a]
	I0605 17:33:10.575764  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.586747  408313 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 17:33:10.586900  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 17:33:10.597271  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:10.608133  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:10.668737  408313 cri.go:88] found id: "9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:10.668812  408313 cri.go:88] found id: ""
	I0605 17:33:10.668842  408313 logs.go:284] 1 containers: [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c]
	I0605 17:33:10.668953  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:10.675739  408313 logs.go:123] Gathering logs for kubelet ...
	I0605 17:33:10.675822  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 17:33:10.791701  408313 logs.go:123] Gathering logs for kube-apiserver [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4] ...
	I0605 17:33:10.791776  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:10.907600  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:10.924315  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:10.962455  408313 logs.go:123] Gathering logs for kube-scheduler [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719] ...
	I0605 17:33:10.962509  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:11.078060  408313 logs.go:123] Gathering logs for kube-proxy [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098] ...
	I0605 17:33:11.078165  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:11.103752  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:11.106961  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:11.199528  408313 logs.go:123] Gathering logs for kindnet [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c] ...
	I0605 17:33:11.199612  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:11.286935  408313 logs.go:123] Gathering logs for container status ...
	I0605 17:33:11.287009  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0605 17:33:11.407829  408313 logs.go:123] Gathering logs for dmesg ...
	I0605 17:33:11.407860  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 17:33:11.413183  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:11.422671  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:11.459401  408313 logs.go:123] Gathering logs for describe nodes ...
	I0605 17:33:11.459433  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0605 17:33:11.622016  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:11.623440  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:11.807758  408313 logs.go:123] Gathering logs for etcd [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576] ...
	I0605 17:33:11.807795  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:11.912632  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:11.916599  408313 logs.go:123] Gathering logs for coredns [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e] ...
	I0605 17:33:11.916657  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:11.929985  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:12.005580  408313 logs.go:123] Gathering logs for kube-controller-manager [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a] ...
	I0605 17:33:12.005681  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:12.098310  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:12.105012  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:12.162827  408313 logs.go:123] Gathering logs for CRI-O ...
	I0605 17:33:12.162869  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0605 17:33:12.411736  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:12.441851  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:12.595839  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:12.600103  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:12.908823  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:12.930258  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:13.097302  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:13.102543  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:13.411491  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:13.421971  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:13.596292  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0605 17:33:13.600850  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:13.912095  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:13.921157  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:14.098446  408313 kapi.go:107] duration metric: took 1m4.520051227s to wait for kubernetes.io/minikube-addons=registry ...
	I0605 17:33:14.102935  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:14.407251  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:14.421510  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:14.599765  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:14.778009  408313 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0605 17:33:14.787208  408313 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0605 17:33:14.788640  408313 api_server.go:141] control plane version: v1.27.2
	I0605 17:33:14.788666  408313 api_server.go:131] duration metric: took 4.758132634s to wait for apiserver health ...
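The healthz probe above is an anonymous HTTPS GET that minikube retries until the body reads "ok". Under kubeadm defaults the system:public-info-viewer binding exposes /healthz to unauthenticated clients, so the same check can be reproduced from the host (a sketch; the 0.5s cadence is an assumption, and -k skips verification of the cluster's self-signed certificate):

    until curl -ksf https://192.168.49.2:8443/healthz >/dev/null; do sleep 0.5; done
    curl -ks https://192.168.49.2:8443/healthz   # prints: ok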
	I0605 17:33:14.788675  408313 system_pods.go:43] waiting for kube-system pods to appear ...
	I0605 17:33:14.788697  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 17:33:14.788764  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 17:33:14.836296  408313 cri.go:88] found id: "fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:14.836320  408313 cri.go:88] found id: ""
	I0605 17:33:14.836328  408313 logs.go:284] 1 containers: [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4]
	I0605 17:33:14.836383  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:14.843068  408313 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 17:33:14.843146  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 17:33:14.910072  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:14.921974  408313 cri.go:88] found id: "6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:14.922040  408313 cri.go:88] found id: ""
	I0605 17:33:14.922062  408313 logs.go:284] 1 containers: [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576]
	I0605 17:33:14.922155  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:14.924466  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:14.933871  408313 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 17:33:14.933991  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 17:33:14.998285  408313 cri.go:88] found id: "508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:14.998355  408313 cri.go:88] found id: ""
	I0605 17:33:14.998378  408313 logs.go:284] 1 containers: [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e]
	I0605 17:33:14.998471  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:15.010750  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 17:33:15.010915  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 17:33:15.073940  408313 cri.go:88] found id: "afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:15.073967  408313 cri.go:88] found id: ""
	I0605 17:33:15.073976  408313 logs.go:284] 1 containers: [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719]
	I0605 17:33:15.074051  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:15.081778  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 17:33:15.081860  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 17:33:15.104736  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:15.137616  408313 cri.go:88] found id: "5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:15.137693  408313 cri.go:88] found id: ""
	I0605 17:33:15.137715  408313 logs.go:284] 1 containers: [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098]
	I0605 17:33:15.137801  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:15.143124  408313 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 17:33:15.143246  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 17:33:15.195016  408313 cri.go:88] found id: "07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:15.195087  408313 cri.go:88] found id: ""
	I0605 17:33:15.195108  408313 logs.go:284] 1 containers: [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a]
	I0605 17:33:15.195177  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:15.200202  408313 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 17:33:15.200311  408313 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 17:33:15.250276  408313 cri.go:88] found id: "9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:15.250350  408313 cri.go:88] found id: ""
	I0605 17:33:15.250372  408313 logs.go:284] 1 containers: [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c]
	I0605 17:33:15.250505  408313 ssh_runner.go:195] Run: which crictl
	I0605 17:33:15.255205  408313 logs.go:123] Gathering logs for etcd [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576] ...
	I0605 17:33:15.255267  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576"
	I0605 17:33:15.308041  408313 logs.go:123] Gathering logs for coredns [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e] ...
	I0605 17:33:15.308073  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e"
	I0605 17:33:15.353184  408313 logs.go:123] Gathering logs for kube-controller-manager [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a] ...
	I0605 17:33:15.353214  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a"
	I0605 17:33:15.414643  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:15.421179  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:15.424187  408313 logs.go:123] Gathering logs for dmesg ...
	I0605 17:33:15.424226  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 17:33:15.454948  408313 logs.go:123] Gathering logs for describe nodes ...
	I0605 17:33:15.455006  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0605 17:33:15.605124  408313 logs.go:123] Gathering logs for kube-apiserver [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4] ...
	I0605 17:33:15.605158  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4"
	I0605 17:33:15.610491  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:15.698728  408313 logs.go:123] Gathering logs for kube-scheduler [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719] ...
	I0605 17:33:15.698767  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719"
	I0605 17:33:15.773743  408313 logs.go:123] Gathering logs for kube-proxy [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098] ...
	I0605 17:33:15.773773  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098"
	I0605 17:33:15.819420  408313 logs.go:123] Gathering logs for kindnet [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c] ...
	I0605 17:33:15.819479  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c"
	I0605 17:33:15.888621  408313 logs.go:123] Gathering logs for CRI-O ...
	I0605 17:33:15.888652  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0605 17:33:15.908064  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:15.926986  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:16.002389  408313 logs.go:123] Gathering logs for container status ...
	I0605 17:33:16.002438  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0605 17:33:16.101689  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:16.118266  408313 logs.go:123] Gathering logs for kubelet ...
	I0605 17:33:16.118340  408313 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 17:33:16.409430  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:16.423394  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:16.600614  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:16.907753  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:16.921436  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:17.100848  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:17.407822  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0605 17:33:17.421926  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:17.604654  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:17.909577  408313 kapi.go:107] duration metric: took 1m4.525409852s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0605 17:33:17.913053  408313 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-735995 cluster.
	I0605 17:33:17.915736  408313 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0605 17:33:17.918049  408313 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0605 17:33:17.933820  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:18.099815  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:18.447066  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:18.601533  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:18.789526  408313 system_pods.go:59] 17 kube-system pods found
	I0605 17:33:18.789565  408313 system_pods.go:61] "coredns-5d78c9869d-l5bkd" [4f797771-1160-4aee-90d6-6318e79fb0f1] Running
	I0605 17:33:18.789573  408313 system_pods.go:61] "csi-hostpath-attacher-0" [865791b9-c9c5-4006-a914-13a73b32e398] Running
	I0605 17:33:18.789578  408313 system_pods.go:61] "csi-hostpath-resizer-0" [83dcd2f8-2a1e-4450-8d18-5e5a86bda005] Running
	I0605 17:33:18.789589  408313 system_pods.go:61] "csi-hostpathplugin-jsp8k" [17cdb2d4-6cb2-4b5d-b466-8fac66c26119] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0605 17:33:18.789596  408313 system_pods.go:61] "etcd-addons-735995" [9cd50488-e14a-41b5-ab75-bd0e60fb5629] Running
	I0605 17:33:18.789606  408313 system_pods.go:61] "kindnet-n94t6" [636d16c5-d20f-4ce5-9bcd-6785b44e7099] Running
	I0605 17:33:18.789612  408313 system_pods.go:61] "kube-apiserver-addons-735995" [8607dcea-cddf-40ed-9bd2-0f3c8cfb5a93] Running
	I0605 17:33:18.789622  408313 system_pods.go:61] "kube-controller-manager-addons-735995" [7b02cda4-0e3f-4011-b5c7-e2992fea324c] Running
	I0605 17:33:18.789631  408313 system_pods.go:61] "kube-ingress-dns-minikube" [caadaba2-93ce-42a1-8339-fc8d5e28c44a] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0605 17:33:18.789641  408313 system_pods.go:61] "kube-proxy-cvrjb" [6339c547-f0d2-473e-8384-9a2a6edb94c1] Running
	I0605 17:33:18.789646  408313 system_pods.go:61] "kube-scheduler-addons-735995" [a17a0f19-922c-4753-b9e3-ace693ab8799] Running
	I0605 17:33:18.789652  408313 system_pods.go:61] "metrics-server-844d8db974-66p4n" [da2b3efb-e47f-430a-b8c7-e9c926140c32] Running
	I0605 17:33:18.789662  408313 system_pods.go:61] "registry-d94xj" [3b4e0792-a45f-41f1-911a-36c1609f1e26] Running
	I0605 17:33:18.789669  408313 system_pods.go:61] "registry-proxy-6c5b7" [542106f4-ef94-45fe-8183-768a7d7b500f] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0605 17:33:18.789680  408313 system_pods.go:61] "snapshot-controller-75bbb956b9-4wct2" [bebcc4f3-f9bf-4eef-ab2b-834954867d13] Running
	I0605 17:33:18.789820  408313 system_pods.go:61] "snapshot-controller-75bbb956b9-x66wp" [8f84c95d-a6f1-4a7e-a332-fd7fb635f8f0] Running
	I0605 17:33:18.789832  408313 system_pods.go:61] "storage-provisioner" [ed27cc6b-2dcd-4877-9e7a-e9064bd85070] Running
	I0605 17:33:18.789838  408313 system_pods.go:74] duration metric: took 4.001157632s to wait for pod list to return data ...
	I0605 17:33:18.789847  408313 default_sa.go:34] waiting for default service account to be created ...
	I0605 17:33:18.794363  408313 default_sa.go:45] found service account: "default"
	I0605 17:33:18.794390  408313 default_sa.go:55] duration metric: took 4.533756ms for default service account to be created ...
	I0605 17:33:18.794414  408313 system_pods.go:116] waiting for k8s-apps to be running ...
	I0605 17:33:18.807542  408313 system_pods.go:86] 17 kube-system pods found
	I0605 17:33:18.807629  408313 system_pods.go:89] "coredns-5d78c9869d-l5bkd" [4f797771-1160-4aee-90d6-6318e79fb0f1] Running
	I0605 17:33:18.807651  408313 system_pods.go:89] "csi-hostpath-attacher-0" [865791b9-c9c5-4006-a914-13a73b32e398] Running
	I0605 17:33:18.807673  408313 system_pods.go:89] "csi-hostpath-resizer-0" [83dcd2f8-2a1e-4450-8d18-5e5a86bda005] Running
	I0605 17:33:18.807709  408313 system_pods.go:89] "csi-hostpathplugin-jsp8k" [17cdb2d4-6cb2-4b5d-b466-8fac66c26119] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0605 17:33:18.807739  408313 system_pods.go:89] "etcd-addons-735995" [9cd50488-e14a-41b5-ab75-bd0e60fb5629] Running
	I0605 17:33:18.807762  408313 system_pods.go:89] "kindnet-n94t6" [636d16c5-d20f-4ce5-9bcd-6785b44e7099] Running
	I0605 17:33:18.807784  408313 system_pods.go:89] "kube-apiserver-addons-735995" [8607dcea-cddf-40ed-9bd2-0f3c8cfb5a93] Running
	I0605 17:33:18.807818  408313 system_pods.go:89] "kube-controller-manager-addons-735995" [7b02cda4-0e3f-4011-b5c7-e2992fea324c] Running
	I0605 17:33:18.807846  408313 system_pods.go:89] "kube-ingress-dns-minikube" [caadaba2-93ce-42a1-8339-fc8d5e28c44a] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0605 17:33:18.807866  408313 system_pods.go:89] "kube-proxy-cvrjb" [6339c547-f0d2-473e-8384-9a2a6edb94c1] Running
	I0605 17:33:18.807886  408313 system_pods.go:89] "kube-scheduler-addons-735995" [a17a0f19-922c-4753-b9e3-ace693ab8799] Running
	I0605 17:33:18.807926  408313 system_pods.go:89] "metrics-server-844d8db974-66p4n" [da2b3efb-e47f-430a-b8c7-e9c926140c32] Running
	I0605 17:33:18.807951  408313 system_pods.go:89] "registry-d94xj" [3b4e0792-a45f-41f1-911a-36c1609f1e26] Running
	I0605 17:33:18.807973  408313 system_pods.go:89] "registry-proxy-6c5b7" [542106f4-ef94-45fe-8183-768a7d7b500f] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0605 17:33:18.807993  408313 system_pods.go:89] "snapshot-controller-75bbb956b9-4wct2" [bebcc4f3-f9bf-4eef-ab2b-834954867d13] Running
	I0605 17:33:18.808026  408313 system_pods.go:89] "snapshot-controller-75bbb956b9-x66wp" [8f84c95d-a6f1-4a7e-a332-fd7fb635f8f0] Running
	I0605 17:33:18.808048  408313 system_pods.go:89] "storage-provisioner" [ed27cc6b-2dcd-4877-9e7a-e9064bd85070] Running
	I0605 17:33:18.808069  408313 system_pods.go:126] duration metric: took 13.646331ms to wait for k8s-apps to be running ...
	I0605 17:33:18.808089  408313 system_svc.go:44] waiting for kubelet service to be running ....
	I0605 17:33:18.808174  408313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:33:18.832843  408313 system_svc.go:56] duration metric: took 24.745066ms WaitForService to wait for kubelet.
	I0605 17:33:18.832911  408313 kubeadm.go:581] duration metric: took 1m14.088505235s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0605 17:33:18.832948  408313 node_conditions.go:102] verifying NodePressure condition ...
	I0605 17:33:18.838814  408313 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0605 17:33:18.838892  408313 node_conditions.go:123] node cpu capacity is 2
	I0605 17:33:18.838921  408313 node_conditions.go:105] duration metric: took 5.948497ms to run NodePressure ...
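The NodePressure verification above reads the capacity and conditions the kubelet reports on the node object. An equivalent spot check with kubectl (illustrative only; prints one type=status pair per line):

    kubectl --context addons-735995 get node addons-735995 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'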
	I0605 17:33:18.838945  408313 start.go:228] waiting for startup goroutines ...
	I0605 17:33:18.921833  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:19.103343  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:19.421651  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:19.600490  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:19.922768  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:20.100290  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:20.423871  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:20.609193  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:20.923164  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:21.100546  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:21.422245  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:21.600465  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:21.921451  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:22.100501  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:22.421573  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:22.599691  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:22.924589  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:23.100855  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:23.422513  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:23.599469  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:23.921708  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:24.100559  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:24.427423  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:24.600193  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:24.922436  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:25.102872  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:25.422552  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:25.602566  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:25.924218  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:26.100282  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:26.421456  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:26.601083  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:26.923326  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:27.100639  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:27.427489  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:27.601444  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:27.924112  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:28.100643  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:28.429669  408313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0605 17:33:28.600286  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:28.921804  408313 kapi.go:107] duration metric: took 1m19.030849226s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0605 17:33:29.099808  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:29.600375  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:30.102074  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:30.600327  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:31.100469  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:31.599173  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:32.100128  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:32.600687  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:33.099687  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:33.599325  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:34.100210  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:34.599408  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:35.099389  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:35.600069  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:36.100039  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:36.602282  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:37.100337  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:37.600744  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:38.099395  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:38.599713  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:39.100884  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:39.600235  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:40.100672  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:40.600533  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:41.099494  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:41.599991  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:42.101750  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:42.599254  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:43.099858  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:43.600226  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:44.102189  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:44.600025  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:45.105291  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:45.599213  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:46.101417  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:46.600521  408313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0605 17:33:47.115604  408313 kapi.go:107] duration metric: took 1m37.532593975s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0605 17:33:47.118885  408313 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, inspektor-gadget, metrics-server, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0605 17:33:47.122146  408313 addons.go:499] enable addons completed in 1m43.212870954s: enabled=[cloud-spanner ingress-dns default-storageclass storage-provisioner inspektor-gadget metrics-server volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0605 17:33:47.122260  408313 start.go:233] waiting for cluster config update ...
	I0605 17:33:47.122324  408313 start.go:242] writing updated cluster config ...
	I0605 17:33:47.122737  408313 ssh_runner.go:195] Run: rm -f paused
	I0605 17:33:47.213588  408313 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0605 17:33:47.216456  408313 out.go:177] * Done! kubectl is now configured to use "addons-735995" cluster and "default" namespace by default
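The kapi.go:96 lines that dominate this log are a simple poll: roughly twice a second minikube re-lists the pods behind each addon's label selector until they leave Pending, and kapi.go:107 then records the total wait. A kubectl equivalent of one of those waits (a sketch; minikube itself polls through client-go in-process rather than shelling out):

    kubectl --context addons-735995 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=5m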
	
	* 
	* ==> CRI-O <==
	* Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.121770513Z" level=info msg="Removed container d3299b4a213132234dad91ed449649d6b6f36741524f0d7b9ac3cc52a21149fb: ingress-nginx/ingress-nginx-admission-create-vh2bl/create" id=812d9934-6ae3-4e49-a9d3-aabd16aa949e name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.123756625Z" level=info msg="Stopping pod sandbox: 140b2bf0ae50bc83a704e0ec5e8b6a59552f844416634e0b9241bc72e304de23" id=41f0edcb-b7ea-430f-9989-cd59cc2b8fa3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.123809507Z" level=info msg="Stopped pod sandbox (already stopped): 140b2bf0ae50bc83a704e0ec5e8b6a59552f844416634e0b9241bc72e304de23" id=41f0edcb-b7ea-430f-9989-cd59cc2b8fa3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.124191366Z" level=info msg="Removing pod sandbox: 140b2bf0ae50bc83a704e0ec5e8b6a59552f844416634e0b9241bc72e304de23" id=dd2076ff-994b-4d85-ac18-57cc3ae8957b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.132998649Z" level=info msg="Removed pod sandbox: 140b2bf0ae50bc83a704e0ec5e8b6a59552f844416634e0b9241bc72e304de23" id=dd2076ff-994b-4d85-ac18-57cc3ae8957b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.133552439Z" level=info msg="Stopping pod sandbox: 0fe1ad5c24c16cd4f5e14b1c54641776655d180fa037f29a51945d46b167d54a" id=189a8408-a85b-47a9-ba5a-4df59b8d3487 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.133595770Z" level=info msg="Stopped pod sandbox (already stopped): 0fe1ad5c24c16cd4f5e14b1c54641776655d180fa037f29a51945d46b167d54a" id=189a8408-a85b-47a9-ba5a-4df59b8d3487 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.134069880Z" level=info msg="Removing pod sandbox: 0fe1ad5c24c16cd4f5e14b1c54641776655d180fa037f29a51945d46b167d54a" id=aeb99896-6e49-4e2f-a168-825e56e11eab name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.143562703Z" level=info msg="Removed pod sandbox: 0fe1ad5c24c16cd4f5e14b1c54641776655d180fa037f29a51945d46b167d54a" id=aeb99896-6e49-4e2f-a168-825e56e11eab name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.144524674Z" level=info msg="Stopping pod sandbox: 228551d5508c745fa96cb994be288e8ab9ddad1cd1b295c6708709a0a76d0b38" id=93f99c26-0313-4f4c-8934-519e17bbf4ff name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.144567595Z" level=info msg="Stopped pod sandbox (already stopped): 228551d5508c745fa96cb994be288e8ab9ddad1cd1b295c6708709a0a76d0b38" id=93f99c26-0313-4f4c-8934-519e17bbf4ff name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.144951472Z" level=info msg="Removing pod sandbox: 228551d5508c745fa96cb994be288e8ab9ddad1cd1b295c6708709a0a76d0b38" id=d642e419-8066-4820-b3cc-bce668d84216 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.153059062Z" level=info msg="Removed pod sandbox: 228551d5508c745fa96cb994be288e8ab9ddad1cd1b295c6708709a0a76d0b38" id=d642e419-8066-4820-b3cc-bce668d84216 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.153606411Z" level=info msg="Stopping pod sandbox: 718d9f207e466abc73b2d61fd2db001d4b300d5a9809595ad5cdc1bffc487b0d" id=488d8dac-a76f-4ad6-90d0-c4e3c591ad43 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.153659424Z" level=info msg="Stopped pod sandbox (already stopped): 718d9f207e466abc73b2d61fd2db001d4b300d5a9809595ad5cdc1bffc487b0d" id=488d8dac-a76f-4ad6-90d0-c4e3c591ad43 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.154015371Z" level=info msg="Removing pod sandbox: 718d9f207e466abc73b2d61fd2db001d4b300d5a9809595ad5cdc1bffc487b0d" id=c265792e-59a0-4dc2-aa98-92e5a96a52d6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.162309530Z" level=info msg="Removed pod sandbox: 718d9f207e466abc73b2d61fd2db001d4b300d5a9809595ad5cdc1bffc487b0d" id=c265792e-59a0-4dc2-aa98-92e5a96a52d6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.162820981Z" level=info msg="Stopping pod sandbox: e183366bb99829a73dec085e18e0b0177e96377ceed9276c7202df636045e3da" id=c8183936-1564-495d-a66b-97e2042fc2a7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.162862261Z" level=info msg="Stopped pod sandbox (already stopped): e183366bb99829a73dec085e18e0b0177e96377ceed9276c7202df636045e3da" id=c8183936-1564-495d-a66b-97e2042fc2a7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.163229170Z" level=info msg="Removing pod sandbox: e183366bb99829a73dec085e18e0b0177e96377ceed9276c7202df636045e3da" id=f881d5b3-b748-4969-a181-f32c58e3f564 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.170842720Z" level=info msg="Removed pod sandbox: e183366bb99829a73dec085e18e0b0177e96377ceed9276c7202df636045e3da" id=f881d5b3-b748-4969-a181-f32c58e3f564 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.171518200Z" level=info msg="Stopping pod sandbox: 4463b0e15f3a4ef35ff4e0d5c2f0ab27bf286be45fba5b4c02c674c9dee66287" id=37f3223c-3e25-4af1-a86a-9d8e7df798f6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.171554927Z" level=info msg="Stopped pod sandbox (already stopped): 4463b0e15f3a4ef35ff4e0d5c2f0ab27bf286be45fba5b4c02c674c9dee66287" id=37f3223c-3e25-4af1-a86a-9d8e7df798f6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.172032564Z" level=info msg="Removing pod sandbox: 4463b0e15f3a4ef35ff4e0d5c2f0ab27bf286be45fba5b4c02c674c9dee66287" id=c7b24cc2-0c8e-4095-ad08-ed78bc9fefba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 17:36:51 addons-735995 crio[892]: time="2023-06-05 17:36:51.181350002Z" level=info msg="Removed pod sandbox: 4463b0e15f3a4ef35ff4e0d5c2f0ab27bf286be45fba5b4c02c674c9dee66287" id=c7b24cc2-0c8e-4095-ad08-ed78bc9fefba name=/runtime.v1.RuntimeService/RemovePodSandbox
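Each sandbox in the CRI-O section above is garbage-collected with two CRI calls: StopPodSandbox, a no-op when the sandbox is already stopped, followed by RemovePodSandbox, each logged under its own request id. The same pair can be issued by hand with crictl, e.g. for the first sandbox shown:

    sudo crictl stopp 140b2bf0ae50bc83a704e0ec5e8b6a59552f844416634e0b9241bc72e304de23
    sudo crictl rmp 140b2bf0ae50bc83a704e0ec5e8b6a59552f844416634e0b9241bc72e304de23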
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	da2f434356694       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                                             6 seconds ago       Exited              hello-world-app                          2                   fb983610bdce8       hello-world-app-65bdb79f98-p8crq
	61a742d7a586b       docker.io/library/nginx@sha256:203cba3f56d7dba1d66b95c091db65a4f0778eb5d16e76151e73e0413e317328                                              2 minutes ago       Running             nginx                                    0                   be994479502e2       nginx
	f7687af86ab66       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	e284fa186f3e5       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	5dfcd8e197d26       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	2f02f9d7c68e6       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	68a1c67a10186       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	af6a6d78dabf2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                                 3 minutes ago       Running             gcp-auth                                 0                   a6754fbd60916       gcp-auth-58478865f7-bplzt
	a2703a1890ed2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   3 minutes ago       Running             csi-external-health-monitor-controller   0                   7a445c2c1dd90       csi-hostpathplugin-jsp8k
	151015d91fcee       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   218c8c1eb9fe1       snapshot-controller-75bbb956b9-4wct2
	6c75e5ca1ca65       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago       Running             volume-snapshot-controller               0                   1f3d27bcd5d4f       snapshot-controller-75bbb956b9-x66wp
	49af45f52a3a0       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago       Running             csi-attacher                             0                   3715291fb45dc       csi-hostpath-attacher-0
	2a5a86c856dcb       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              4 minutes ago       Running             csi-resizer                              0                   a0fc9d4f1bf66       csi-hostpath-resizer-0
	2fd39d328683f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   33bff1d31bec0       storage-provisioner
	508b9734603b5       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                                             4 minutes ago       Running             coredns                                  0                   e8b33b73cb473       coredns-5d78c9869d-l5bkd
	5ae220af0a775       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0                                                                             4 minutes ago       Running             kube-proxy                               0                   3112b46084f89       kube-proxy-cvrjb
	9c48170f08535       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                                                             4 minutes ago       Running             kindnet-cni                              0                   80b97f2f1fabc       kindnet-n94t6
	afd5bfa3324d4       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840                                                                             5 minutes ago       Running             kube-scheduler                           0                   fc54f1742e68d       kube-scheduler-addons-735995
	07e73cacba03d       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4                                                                             5 minutes ago       Running             kube-controller-manager                  0                   c6a0355f54da6       kube-controller-manager-addons-735995
	6ae1a1fe127bd       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                                                             5 minutes ago       Running             etcd                                     0                   3d0e65e686eee       etcd-addons-735995
	fbb09dc418a04       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae                                                                             5 minutes ago       Running             kube-apiserver                           0                   7252c9b135e68       kube-apiserver-addons-735995
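The CONTAINER column holds 13-character prefixes of the 64-character ids used elsewhere in this report; 508b9734603b5 here is the coredns container 508b9734603b55fe... whose logs were gathered above. crictl resolves unique id prefixes, so either form should work:

    sudo crictl logs --tail 400 508b9734603b5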
	
	* 
	* ==> coredns [508b9734603b55feaa43272b0073de9cad7b1b6a81c5c5d33e6d9a201e32764e] <==
	* [INFO] 10.244.0.17:59030 - 35447 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076825s
	[INFO] 10.244.0.17:59030 - 412 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058839s
	[INFO] 10.244.0.17:59030 - 31541 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058191s
	[INFO] 10.244.0.17:59030 - 14376 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041715s
	[INFO] 10.244.0.17:59030 - 62096 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001125565s
	[INFO] 10.244.0.17:59030 - 55535 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000895993s
	[INFO] 10.244.0.17:59030 - 51492 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069629s
	[INFO] 10.244.0.17:56240 - 7951 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109161s
	[INFO] 10.244.0.17:56240 - 35937 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049962s
	[INFO] 10.244.0.17:53340 - 42667 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044176s
	[INFO] 10.244.0.17:53340 - 19294 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048558s
	[INFO] 10.244.0.17:53340 - 56075 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048673s
	[INFO] 10.244.0.17:56240 - 54859 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071336s
	[INFO] 10.244.0.17:53340 - 59059 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049936s
	[INFO] 10.244.0.17:53340 - 18944 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043463s
	[INFO] 10.244.0.17:56240 - 30956 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000101227s
	[INFO] 10.244.0.17:56240 - 21738 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067619s
	[INFO] 10.244.0.17:53340 - 49215 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000097108s
	[INFO] 10.244.0.17:56240 - 1800 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043012s
	[INFO] 10.244.0.17:53340 - 31222 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001279633s
	[INFO] 10.244.0.17:56240 - 63275 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000890881s
	[INFO] 10.244.0.17:56240 - 47075 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001060884s
	[INFO] 10.244.0.17:53340 - 19109 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001003965s
	[INFO] 10.244.0.17:56240 - 37436 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005961s
	[INFO] 10.244.0.17:53340 - 52790 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000106905s
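The long NXDOMAIN run above is ordinary resolv.conf search-path expansion, not a registry fault: the pod queried hello-world-app.default.svc.cluster.local, which has only four dots, so with ndots:5 the resolver tries every search suffix before the bare name finally answers NOERROR. A typical resolv.conf for a pod in the default namespace on this cluster would be (reconstructed from the suffixes visible in the queries; the nameserver address is the conventional kube-dns ClusterIP and is an assumption here):

    search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    nameserver 10.96.0.10
    options ndots:5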
	
	* 
	* ==> describe nodes <==
	* Name:               addons-735995
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-735995
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b059332e570e1d712234ec4f823aa77854e7956d
	                    minikube.k8s.io/name=addons-735995
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_05T17_31_51_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-735995
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-735995"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Jun 2023 17:31:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-735995
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Jun 2023 17:36:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Jun 2023 17:34:24 +0000   Mon, 05 Jun 2023 17:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Jun 2023 17:34:24 +0000   Mon, 05 Jun 2023 17:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Jun 2023 17:34:24 +0000   Mon, 05 Jun 2023 17:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Jun 2023 17:34:24 +0000   Mon, 05 Jun 2023 17:32:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-735995
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 3684da2e34114758b7496e92a206a799
	  System UUID:                3a3b4a4e-f4f2-44f2-83a0-39a8ef621246
	  Boot ID:                    da2c815d-c926-431d-a79c-25e8afa61b1d
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
  default                     hello-world-app-65bdb79f98-p8crq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
  gcp-auth                    gcp-auth-58478865f7-bplzt                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
  kube-system                 coredns-5d78c9869d-l5bkd                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m52s
  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
  kube-system                 csi-hostpathplugin-jsp8k                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
  kube-system                 etcd-addons-735995                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m4s
  kube-system                 kindnet-n94t6                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m52s
  kube-system                 kube-apiserver-addons-735995             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
  kube-system                 kube-controller-manager-addons-735995    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
  kube-system                 kube-proxy-cvrjb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
  kube-system                 kube-scheduler-addons-735995             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
  kube-system                 snapshot-controller-75bbb956b9-4wct2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
  kube-system                 snapshot-controller-75bbb956b9-x66wp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m47s  kube-proxy       
	  Normal  Starting                 5m5s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m5s   kubelet          Node addons-735995 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s   kubelet          Node addons-735995 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s   kubelet          Node addons-735995 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m53s  node-controller  Node addons-735995 event: Registered Node addons-735995 in Controller
	  Normal  NodeReady                4m21s  kubelet          Node addons-735995 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000769] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000df8e4d15
	[  +0.001037] FS-Cache: N-key=[8] '7acfc90000000000'
	[  +0.002865] FS-Cache: Duplicate cookie detected
	[  +0.000688] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=0000000048956af5
	[  +0.001033] FS-Cache: O-key=[8] '7acfc90000000000'
	[  +0.000786] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000f0906f1d
	[  +0.001019] FS-Cache: N-key=[8] '7acfc90000000000'
	[  +2.251230] FS-Cache: Duplicate cookie detected
	[  +0.000694] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000954] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=00000000b0ed0c6e
	[  +0.001110] FS-Cache: O-key=[8] '79cfc90000000000'
	[  +0.000701] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000df8e4d15
	[  +0.001059] FS-Cache: N-key=[8] '79cfc90000000000'
	[  +0.398785] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001039] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=000000006ec03a4d
	[  +0.001323] FS-Cache: O-key=[8] '82cfc90000000000'
	[  +0.000865] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000965] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000006926a378
	[  +0.001086] FS-Cache: N-key=[8] '82cfc90000000000'
	[Jun 5 16:26] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [6ae1a1fe127bd661dac3a989b53b07ecc8c87963050aa56c480fe5529e6e9576] <==
	* {"level":"info","ts":"2023-06-05T17:31:43.448Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-05T17:31:43.448Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-05T17:31:43.447Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-06-05T17:31:43.448Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-06-05T17:31:43.448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-06-05T17:31:43.448Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-06-05T17:31:44.227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-06-05T17:31:44.228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-06-05T17:31:44.232Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-735995 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-05T17:31:44.232Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T17:31:44.233Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-05T17:31:44.235Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T17:31:44.243Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-06-05T17:31:44.248Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:31:44.251Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-05T17:31:44.252Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-05T17:31:44.252Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:31:44.252Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:31:44.252Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:32:07.870Z","caller":"traceutil/trace.go:171","msg":"trace[1942935828] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"111.703651ms","start":"2023-06-05T17:32:07.759Z","end":"2023-06-05T17:32:07.870Z","steps":["trace[1942935828] 'process raft request'  (duration: 63.224955ms)","trace[1942935828] 'compare'  (duration: 48.188662ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [af6a6d78dabf25e339e1e02c6946c7e224b0c37221091a858f164ffbdadca047] <==
	* 2023/06/05 17:33:17 GCP Auth Webhook started!
	2023/06/05 17:33:57 Ready to marshal response ...
	2023/06/05 17:33:57 Ready to write response ...
	2023/06/05 17:34:10 Ready to marshal response ...
	2023/06/05 17:34:10 Ready to write response ...
	2023/06/05 17:36:30 Ready to marshal response ...
	2023/06/05 17:36:30 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  17:36:56 up  2:19,  0 users,  load average: 0.49, 1.80, 2.82
	Linux addons-735995 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [9c48170f085351c3a7f574418c8410fb3d7103364c99e043b0791c499c77551c] <==
	* I0605 17:34:54.353297       1 main.go:227] handling current node
	I0605 17:35:04.359107       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:04.359141       1 main.go:227] handling current node
	I0605 17:35:14.363885       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:14.363912       1 main.go:227] handling current node
	I0605 17:35:24.372175       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:24.372206       1 main.go:227] handling current node
	I0605 17:35:34.382658       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:34.382685       1 main.go:227] handling current node
	I0605 17:35:44.387214       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:44.387240       1 main.go:227] handling current node
	I0605 17:35:54.399260       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:35:54.399291       1 main.go:227] handling current node
	I0605 17:36:04.406679       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:36:04.406785       1 main.go:227] handling current node
	I0605 17:36:14.415458       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:36:14.415487       1 main.go:227] handling current node
	I0605 17:36:24.426431       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:36:24.426556       1 main.go:227] handling current node
	I0605 17:36:34.430910       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:36:34.430938       1 main.go:227] handling current node
	I0605 17:36:44.444033       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:36:44.444382       1 main.go:227] handling current node
	I0605 17:36:54.455893       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:36:54.456131       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [fbb09dc418a042916e06860b5d931c1d7caab12033c268aecb50913bce7e19a4] <==
	* I0605 17:33:47.385594       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0605 17:34:03.763228       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0605 17:34:03.781493       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0605 17:34:04.800420       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0605 17:34:06.624501       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0605 17:34:06.624534       1 handler_proxy.go:100] no RequestInfo found in the context
	E0605 17:34:06.624566       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0605 17:34:06.624580       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0605 17:34:06.649951       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0605 17:34:10.143766       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0605 17:34:10.595357       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.98.188.175]
	E0605 17:35:06.624955       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0605 17:35:06.624989       1 handler_proxy.go:100] no RequestInfo found in the context
	E0605 17:35:06.625029       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0605 17:35:06.625037       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0605 17:36:30.664732       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.98.252.150]
	E0605 17:36:46.605287       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400f3311a0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400e626050), ResponseWriter:(*httpsnoop.rw)(0x400e626050), Flusher:(*httpsnoop.rw)(0x400e626050), CloseNotifier:(*httpsnoop.rw)(0x400e626050), Pusher:(*httpsnoop.rw)(0x400e626050)}}, encoder:(*versioning.codec)(0x400faefd60), memAllocator:(*runtime.Allocator)(0x4004b72fc0)})
	I0605 17:36:47.598919       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0605 17:36:47.598977       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0605 17:36:47.599340       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0605 17:36:47.599385       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0605 17:36:47.609470       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0605 17:36:47.609532       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [07e73cacba03db0664d4406d5c97cb31c3d167933f90fe8bdbca532ee3690d5a] <==
	* E0605 17:34:04.802643       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0605 17:34:06.098049       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:34:06.098111       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0605 17:34:09.115556       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:34:09.115670       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0605 17:34:12.917658       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:34:12.917696       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0605 17:34:13.904657       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0605 17:34:22.041006       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:34:22.041145       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0605 17:34:32.673277       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0605 17:34:32.673453       1 shared_informer.go:318] Caches are synced for resource quota
	I0605 17:34:33.143432       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0605 17:34:33.143500       1 shared_informer.go:318] Caches are synced for garbage collector
	W0605 17:34:45.637592       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:34:45.637631       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0605 17:35:27.543322       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:35:27.543440       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0605 17:36:25.742957       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0605 17:36:25.742996       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0605 17:36:30.401405       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0605 17:36:30.435755       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-p8crq"
	I0605 17:36:47.180964       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0605 17:36:47.205042       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	I0605 17:36:48.345738       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	
	* 
	* ==> kube-proxy [5ae220af0a775dcc865e6ed5c2def62c96a40ac6ccba06f7b9f031f50fff8098] <==
	* I0605 17:32:03.938108       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0605 17:32:03.938609       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0605 17:32:03.938703       1 server_others.go:551] "Using iptables proxy"
	I0605 17:32:08.171982       1 server_others.go:190] "Using iptables Proxier"
	I0605 17:32:08.176007       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0605 17:32:08.187518       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0605 17:32:08.187632       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0605 17:32:08.187729       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0605 17:32:08.244193       1 server.go:657] "Version info" version="v1.27.2"
	I0605 17:32:08.244227       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0605 17:32:08.245927       1 config.go:188] "Starting service config controller"
	I0605 17:32:08.246005       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0605 17:32:08.246127       1 config.go:97] "Starting endpoint slice config controller"
	I0605 17:32:08.246143       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0605 17:32:08.821717       1 config.go:315] "Starting node config controller"
	I0605 17:32:08.821818       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0605 17:32:08.861429       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0605 17:32:08.863146       1 shared_informer.go:318] Caches are synced for service config
	I0605 17:32:08.922792       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [afd5bfa3324d443676e77f32b7b6d60d9dcebc796fec6bd1f82ab4e046106719] <==
	* W0605 17:31:48.061857       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0605 17:31:48.062634       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0605 17:31:48.061888       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0605 17:31:48.062726       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0605 17:31:48.061917       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0605 17:31:48.062810       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0605 17:31:48.061956       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0605 17:31:48.062901       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0605 17:31:48.062026       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0605 17:31:48.062988       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0605 17:31:48.062065       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0605 17:31:48.063081       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0605 17:31:48.062206       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0605 17:31:48.063172       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0605 17:31:48.065682       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0605 17:31:48.065798       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0605 17:31:48.066153       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0605 17:31:48.066228       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0605 17:31:48.066325       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0605 17:31:48.066379       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0605 17:31:48.066479       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0605 17:31:48.066517       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0605 17:31:48.066606       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0605 17:31:48.066657       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0605 17:31:49.155440       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jun 05 17:36:48 addons-735995 kubelet[1365]: I0605 17:36:48.863314    1365 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b7fd0f3b-716f-42a8-a85b-be1e192fffcc path="/var/lib/kubelet/pods/b7fd0f3b-716f-42a8-a85b-be1e192fffcc/volumes"
	Jun 05 17:36:49 addons-735995 kubelet[1365]: I0605 17:36:49.550289    1365 scope.go:115] "RemoveContainer" containerID="731580839d70d7f54b776dbae38fb88ef3d1f3a00730c1b5eba7c8cf91013673"
	Jun 05 17:36:49 addons-735995 kubelet[1365]: I0605 17:36:49.550531    1365 scope.go:115] "RemoveContainer" containerID="da2f434356694844791ce5fd37ed239ceaea7f15eacffaefec8d2a69fb7581bf"
	Jun 05 17:36:49 addons-735995 kubelet[1365]: E0605 17:36:49.550798    1365 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-p8crq_default(240725f3-82d9-4bd7-84c1-6346dd19713b)\"" pod="default/hello-world-app-65bdb79f98-p8crq" podUID=240725f3-82d9-4bd7-84c1-6346dd19713b
	Jun 05 17:36:51 addons-735995 kubelet[1365]: I0605 17:36:51.034630    1365 scope.go:115] "RemoveContainer" containerID="74ef54dfc5e27486d656b599e416f73b134a7826fb52b1881d66feda74db1c6a"
	Jun 05 17:36:51 addons-735995 kubelet[1365]: W0605 17:36:51.045167    1365 machine.go:65] Cannot read vendor id correctly, set empty.
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.046325    1365 container_manager_linux.go:515] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee, memory: /docker/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/system.slice/kubelet.service"
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.066779    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1675eab0921f8a86dfdc01f9d5061d9cbcadf23cd18c5fd455019ded68b9b992/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1675eab0921f8a86dfdc01f9d5061d9cbcadf23cd18c5fd455019ded68b9b992/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_registry-proxy-6c5b7_542106f4-ef94-45fe-8183-768a7d7b500f/registry-proxy/4.log" to get inode usage: stat /var/log/pods/kube-system_registry-proxy-6c5b7_542106f4-ef94-45fe-8183-768a7d7b500f/registry-proxy/4.log: no such file or directory
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.073179    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b58d39bd1b0850a1f252fde22d0ec6175ce6cc8b01a2c1e7ebdd817b68ef7770/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b58d39bd1b0850a1f252fde22d0ec6175ce6cc8b01a2c1e7ebdd817b68ef7770/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-admission-patch-28whg_2051caff-c050-4203-82f8-4c2e6ab93ee4/patch/2.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-28whg_2051caff-c050-4203-82f8-4c2e6ab93ee4/patch/2.log: no such file or directory
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.074448    1365 manager.go:1106] Failed to create existing container: /docker/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/crio/crio-4a473ef6f07a4511405ead46ef993f8cd083735ddd58078248f3077c55ad71c0: Error finding container 4a473ef6f07a4511405ead46ef993f8cd083735ddd58078248f3077c55ad71c0: Status 404 returned error can't find the container with id 4a473ef6f07a4511405ead46ef993f8cd083735ddd58078248f3077c55ad71c0
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.074821    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a3e72c4ccd65af417f82df581c3bdf4567ca5c3d71d9a4670ab0c14fa515d7ae/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a3e72c4ccd65af417f82df581c3bdf4567ca5c3d71d9a4670ab0c14fa515d7ae/diff: no such file or directory, extraDiskErr: <nil>
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.075747    1365 manager.go:1106] Failed to create existing container: /crio/crio-4a473ef6f07a4511405ead46ef993f8cd083735ddd58078248f3077c55ad71c0: Error finding container 4a473ef6f07a4511405ead46ef993f8cd083735ddd58078248f3077c55ad71c0: Status 404 returned error can't find the container with id 4a473ef6f07a4511405ead46ef993f8cd083735ddd58078248f3077c55ad71c0
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.076854    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d163645dcd99956ebf1d91cdd5212947f9acdf193d1a60fd5c0d81e6c54efe7f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d163645dcd99956ebf1d91cdd5212947f9acdf193d1a60fd5c0d81e6c54efe7f/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-ingress-dns-minikube_caadaba2-93ce-42a1-8339-fc8d5e28c44a/minikube-ingress-dns/5.log" to get inode usage: stat /var/log/pods/kube-system_kube-ingress-dns-minikube_caadaba2-93ce-42a1-8339-fc8d5e28c44a/minikube-ingress-dns/5.log: no such file or directory
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.077083    1365 manager.go:1106] Failed to create existing container: /docker/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/crio/crio-74ef54dfc5e27486d656b599e416f73b134a7826fb52b1881d66feda74db1c6a: Error finding container 74ef54dfc5e27486d656b599e416f73b134a7826fb52b1881d66feda74db1c6a: Status 404 returned error can't find the container with id 74ef54dfc5e27486d656b599e416f73b134a7826fb52b1881d66feda74db1c6a
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.077287    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a3e72c4ccd65af417f82df581c3bdf4567ca5c3d71d9a4670ab0c14fa515d7ae/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a3e72c4ccd65af417f82df581c3bdf4567ca5c3d71d9a4670ab0c14fa515d7ae/diff: no such file or directory, extraDiskErr: <nil>
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.079347    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7204b27eff159fa33675b0b9e3233dc2bd78ea02da2e66a3f6c0162fc3995205/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7204b27eff159fa33675b0b9e3233dc2bd78ea02da2e66a3f6c0162fc3995205/diff: no such file or directory, extraDiskErr: <nil>
	Jun 05 17:36:51 addons-735995 kubelet[1365]: W0605 17:36:51.080219    1365 container.go:485] Failed to get RecentStats("/crio/crio-700089cbbcba4117d62677731fd36bb5a40428b5f629e1a7d7baa6d659fb001a") while determining the next housekeeping: unable to find data in memory cache
	Jun 05 17:36:51 addons-735995 kubelet[1365]: I0605 17:36:51.080475    1365 scope.go:115] "RemoveContainer" containerID="d3299b4a213132234dad91ed449649d6b6f36741524f0d7b9ac3cc52a21149fb"
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.081778    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3686cafbbaee85b0532853a80c2c00b1bfa5643fdfa629db05ad871c47ff4046/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3686cafbbaee85b0532853a80c2c00b1bfa5643fdfa629db05ad871c47ff4046/diff: no such file or directory, extraDiskErr: <nil>
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.082365    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: <nil>, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-admission-create-vh2bl_b7fd0f3b-716f-42a8-a85b-be1e192fffcc/create/0.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-admission-create-vh2bl_b7fd0f3b-716f-42a8-a85b-be1e192fffcc/create/0.log: no such file or directory
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.086843    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3686cafbbaee85b0532853a80c2c00b1bfa5643fdfa629db05ad871c47ff4046/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3686cafbbaee85b0532853a80c2c00b1bfa5643fdfa629db05ad871c47ff4046/diff: no such file or directory, extraDiskErr: <nil>
	Jun 05 17:36:51 addons-735995 kubelet[1365]: E0605 17:36:51.089702    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7204b27eff159fa33675b0b9e3233dc2bd78ea02da2e66a3f6c0162fc3995205/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7204b27eff159fa33675b0b9e3233dc2bd78ea02da2e66a3f6c0162fc3995205/diff: no such file or directory, extraDiskErr: <nil>
	Jun 05 17:36:52 addons-735995 kubelet[1365]: W0605 17:36:52.537342    1365 container.go:586] Failed to update stats for container "/docker/d36a4170624d2128051787a4ed3b0d271f29d554102cc078e778209e72087eee/crio/crio-140b2bf0ae50bc83a704e0ec5e8b6a59552f844416634e0b9241bc72e304de23": unable to determine device info for dir: /var/lib/containers/storage/overlay/fb5527f631e6fb8453726b969e99b0a6eaed83587f8d2dfa764820e7d6a3bc7d/diff: stat failed on /var/lib/containers/storage/overlay/fb5527f631e6fb8453726b969e99b0a6eaed83587f8d2dfa764820e7d6a3bc7d/diff with error: no such file or directory, continuing to push stats
	Jun 05 17:36:52 addons-735995 kubelet[1365]: W0605 17:36:52.982352    1365 container.go:586] Failed to update stats for container "/crio/crio-140b2bf0ae50bc83a704e0ec5e8b6a59552f844416634e0b9241bc72e304de23": unable to determine device info for dir: /var/lib/containers/storage/overlay/fb5527f631e6fb8453726b969e99b0a6eaed83587f8d2dfa764820e7d6a3bc7d/diff: stat failed on /var/lib/containers/storage/overlay/fb5527f631e6fb8453726b969e99b0a6eaed83587f8d2dfa764820e7d6a3bc7d/diff with error: no such file or directory, continuing to push stats
	Jun 05 17:36:56 addons-735995 kubelet[1365]: W0605 17:36:56.172351    1365 container.go:586] Failed to update stats for container "/crio/crio-718d9f207e466abc73b2d61fd2db001d4b300d5a9809595ad5cdc1bffc487b0d": unable to determine device info for dir: /var/lib/containers/storage/overlay/9e01436fc5bc4cc1716aa291faf27c3a03adcabb8936a6c4cd3c4db91ea19ca7/diff: stat failed on /var/lib/containers/storage/overlay/9e01436fc5bc4cc1716aa291faf27c3a03adcabb8936a6c4cd3c4db91ea19ca7/diff with error: no such file or directory, continuing to push stats
	
	* 
	* ==> storage-provisioner [2fd39d328683f03e182fccf5ceecc92e929532641c49212c34a60ad5f49c1998] <==
	* I0605 17:32:35.827758       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0605 17:32:35.855643       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0605 17:32:35.855745       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0605 17:32:35.883761       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0605 17:32:35.884218       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-735995_2cc398ac-5fbd-4dab-8e61-87cf5b348e7a!
	I0605 17:32:35.886509       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48e22060-8807-4fde-933f-4dc9cf03e09c", APIVersion:"v1", ResourceVersion:"804", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-735995_2cc398ac-5fbd-4dab-8e61-87cf5b348e7a became leader
	I0605 17:32:35.985319       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-735995_2cc398ac-5fbd-4dab-8e61-87cf5b348e7a!
	

                                                
                                                
-- /stdout --
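The kube-apiserver log in the dump above repeats `error resolving kube-system/metrics-server: service "metrics-server" not found` followed by a 503 when fetching the OpenAPI spec for `v1beta1.metrics.k8s.io`, which points at a stale APIService registration left behind after the metrics-server service was removed. A minimal sketch for confirming that by hand (assuming the addons-735995 context from this run is still reachable; neither command is part of the test itself):

	# Inspect the aggregated API registration the apiserver keeps retrying.
	kubectl --context addons-735995 get apiservice v1beta1.metrics.k8s.io
	# If it reports Available=False because its backing service is gone, the
	# stale registration can be removed manually:
	kubectl --context addons-735995 delete apiservice v1beta1.metrics.k8s.io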
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-735995 -n addons-735995
helpers_test.go:261: (dbg) Run:  kubectl --context addons-735995 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.11s)
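To iterate on this failure without waiting for the full suite, the subtest can be re-run on its own with Go's test runner (a sketch from the repository root; the `TestAddons` setup still runs first, and the integration suite also accepts its own flags, such as which minikube binary to exercise, which are omitted here):

	go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 30m -v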

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (180.33s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-980425 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-980425 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.449299014s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-980425 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-980425 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5645d663-f153-436b-9c5a-92902f33ea02] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5645d663-f153-436b-9c5a-92902f33ea02] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.013396546s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-980425 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0605 17:46:49.165636  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:46:49.171010  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:46:49.181328  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:46:49.201571  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:46:49.241828  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:46:49.322095  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:46:49.482449  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:46:49.802997  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:46:50.443941  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:46:51.724935  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:46:54.285564  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:46:59.405705  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:47:09.645936  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-980425 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.69252768s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
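The curl above runs inside the minikube node against the ingress controller on localhost, so ssh exiting with status 28 (curl's operation-timed-out code) means the controller never answered, not that the ssh tunnel failed. A sketch of narrowing that down, reusing the context and label selector that appear earlier in this test (the --tail value is arbitrary):

	# Controller logs: look for the nginx.example.com server block being loaded.
	kubectl --context ingress-addon-legacy-980425 -n ingress-nginx logs \
	  -l app.kubernetes.io/component=controller --tail=50
	# Confirm the Ingress got an address and the backing pod/service exist.
	kubectl --context ingress-addon-legacy-980425 get ingress,svc,pod -o wide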
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-980425 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-980425 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0605 17:47:30.126180  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.042066069s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
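nslookup offers little control over its retry behaviour, so most of the 15s spent above is client-side waiting. When probing the ingress-dns responder by hand, dig makes the timeout explicit (a sketch; same node IP as the test used, and dig must be installed on the host):

	# Single query with a 5s budget against the minikube node IP serving ingress-dns.
	dig +short +time=5 +tries=1 hello-john.test @192.168.49.2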
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-980425 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-980425 addons disable ingress-dns --alsologtostderr -v=1: (2.244610339s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-980425 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-980425 addons disable ingress --alsologtostderr -v=1: (7.329716567s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-980425
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-980425:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "126823df667b72aedd0cb6222f7fa726b6ce35aaceda2c8a0b66831c55f7023b",
	        "Created": "2023-06-05T17:43:23.030523818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 435954,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-05T17:43:23.374852566Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:80ea0da8caa6eb7997e8d55fe8736424844c5160aabf0e85547dc140c538e81f",
	        "ResolvConfPath": "/var/lib/docker/containers/126823df667b72aedd0cb6222f7fa726b6ce35aaceda2c8a0b66831c55f7023b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/126823df667b72aedd0cb6222f7fa726b6ce35aaceda2c8a0b66831c55f7023b/hostname",
	        "HostsPath": "/var/lib/docker/containers/126823df667b72aedd0cb6222f7fa726b6ce35aaceda2c8a0b66831c55f7023b/hosts",
	        "LogPath": "/var/lib/docker/containers/126823df667b72aedd0cb6222f7fa726b6ce35aaceda2c8a0b66831c55f7023b/126823df667b72aedd0cb6222f7fa726b6ce35aaceda2c8a0b66831c55f7023b-json.log",
	        "Name": "/ingress-addon-legacy-980425",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-980425:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-980425",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4e946810bd6f6cebeb04b1455be733e90d1ae57a35812361a4ae6e1af2c95801-init/diff:/var/lib/docker/overlay2/12deadd96699cc2736cf6d24a9900cb6d72f9bc5f3f15d793b28adb475def155/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e946810bd6f6cebeb04b1455be733e90d1ae57a35812361a4ae6e1af2c95801/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e946810bd6f6cebeb04b1455be733e90d1ae57a35812361a4ae6e1af2c95801/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e946810bd6f6cebeb04b1455be733e90d1ae57a35812361a4ae6e1af2c95801/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-980425",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-980425/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-980425",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-980425",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-980425",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "606bcd4136cf243f086f8eb6a5b8cf8f45e035371dbedc1e3a64b3d265e23d65",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/606bcd4136cf",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-980425": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "126823df667b",
	                        "ingress-addon-legacy-980425"
	                    ],
	                    "NetworkID": "ffd51213ec78bebe45e8d3c3b351aaa5b7e2296e321a455cb5176663393b6b40",
	                    "EndpointID": "e96b123aa8faf60177934f4b2332a00d08c9c6506292a80e897b629d9a5efbee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
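The inspect output above is the data the test's SSH and tunnelling helpers consume: each guest port (22, 2376, 5000, 8443, 32443) is published on an ephemeral 127.0.0.1 host port, and the container holds the static IP 192.168.49.2 on its profile network. As a minimal sketch (assuming the same profile name, ingress-addon-legacy-980425, and that the container is still running), individual fields can be read back with Go templates instead of parsing the full JSON; these are the same format strings that appear in the Last Start log further below:

	# host port mapped to the guest SSH port 22/tcp; prints e.g. 33128
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-980425
	# container IP on the profile network; prints 192.168.49.2
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ingress-addon-legacy-980425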
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-980425 -n ingress-addon-legacy-980425
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-980425 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-980425 logs -n 25: (1.441725417s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-083977                                                      | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-083977 image ls                                             | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	| image          | functional-083977 image load --daemon                                  | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-083977               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-083977 image ls                                             | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	| image          | functional-083977 image save                                           | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-083977               |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-083977 image rm                                             | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-083977               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-083977 image ls                                             | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	| image          | functional-083977 image load                                           | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-083977 image ls                                             | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	| image          | functional-083977 image save --daemon                                  | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-083977               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-083977                                                      | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-083977                                                      | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-083977 ssh pgrep                                            | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-083977                                                      | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-083977 image build -t                                       | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	|                | localhost/my-image:functional-083977                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-083977                                                      | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-083977 image ls                                             | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	| delete         | -p functional-083977                                                   | functional-083977           | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:42 UTC |
	| start          | -p ingress-addon-legacy-980425                                         | ingress-addon-legacy-980425 | jenkins | v1.30.1 | 05 Jun 23 17:42 UTC | 05 Jun 23 17:44 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-980425                                            | ingress-addon-legacy-980425 | jenkins | v1.30.1 | 05 Jun 23 17:44 UTC | 05 Jun 23 17:44 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-980425                                            | ingress-addon-legacy-980425 | jenkins | v1.30.1 | 05 Jun 23 17:44 UTC | 05 Jun 23 17:44 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-980425                                            | ingress-addon-legacy-980425 | jenkins | v1.30.1 | 05 Jun 23 17:45 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-980425 ip                                         | ingress-addon-legacy-980425 | jenkins | v1.30.1 | 05 Jun 23 17:47 UTC | 05 Jun 23 17:47 UTC |
	| addons         | ingress-addon-legacy-980425                                            | ingress-addon-legacy-980425 | jenkins | v1.30.1 | 05 Jun 23 17:47 UTC | 05 Jun 23 17:47 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-980425                                            | ingress-addon-legacy-980425 | jenkins | v1.30.1 | 05 Jun 23 17:47 UTC | 05 Jun 23 17:47 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/05 17:42:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0605 17:42:58.587330  435505 out.go:296] Setting OutFile to fd 1 ...
	I0605 17:42:58.587486  435505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:42:58.587496  435505 out.go:309] Setting ErrFile to fd 2...
	I0605 17:42:58.587502  435505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:42:58.587664  435505 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 17:42:58.588128  435505 out.go:303] Setting JSON to false
	I0605 17:42:58.589560  435505 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8711,"bootTime":1685978268,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 17:42:58.589663  435505 start.go:137] virtualization:  
	I0605 17:42:58.593313  435505 out.go:177] * [ingress-addon-legacy-980425] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 17:42:58.595202  435505 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 17:42:58.595268  435505 notify.go:220] Checking for updates...
	I0605 17:42:58.600363  435505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 17:42:58.602451  435505 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:42:58.605341  435505 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 17:42:58.607437  435505 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 17:42:58.609816  435505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 17:42:58.612443  435505 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 17:42:58.636469  435505 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 17:42:58.636569  435505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:42:58.713790  435505 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-06-05 17:42:58.702960007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:42:58.713903  435505 docker.go:294] overlay module found
	I0605 17:42:58.718061  435505 out.go:177] * Using the docker driver based on user configuration
	I0605 17:42:58.720290  435505 start.go:297] selected driver: docker
	I0605 17:42:58.720312  435505 start.go:875] validating driver "docker" against <nil>
	I0605 17:42:58.720341  435505 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 17:42:58.720979  435505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:42:58.780726  435505 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-06-05 17:42:58.771190088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:42:58.780883  435505 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0605 17:42:58.781118  435505 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0605 17:42:58.783493  435505 out.go:177] * Using Docker driver with root privileges
	I0605 17:42:58.785701  435505 cni.go:84] Creating CNI manager for ""
	I0605 17:42:58.785727  435505 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 17:42:58.785743  435505 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0605 17:42:58.785758  435505 start_flags.go:319] config:
	{Name:ingress-addon-legacy-980425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-980425 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:42:58.788392  435505 out.go:177] * Starting control plane node ingress-addon-legacy-980425 in cluster ingress-addon-legacy-980425
	I0605 17:42:58.790413  435505 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 17:42:58.792927  435505 out.go:177] * Pulling base image ...
	I0605 17:42:58.795287  435505 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0605 17:42:58.795370  435505 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon
	I0605 17:42:58.812711  435505 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon, skipping pull
	I0605 17:42:58.812738  435505 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f exists in daemon, skipping load
	I0605 17:42:58.873270  435505 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0605 17:42:58.873305  435505 cache.go:57] Caching tarball of preloaded images
	I0605 17:42:58.873505  435505 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0605 17:42:58.875737  435505 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0605 17:42:58.877749  435505 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0605 17:42:59.001980  435505 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0605 17:43:15.003238  435505 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0605 17:43:15.003366  435505 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0605 17:43:16.140791  435505 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0605 17:43:16.141178  435505 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/config.json ...
	I0605 17:43:16.141213  435505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/config.json: {Name:mkb96f190e990bce36ef002fb870f1c4a7c1a350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:43:16.141768  435505 cache.go:195] Successfully downloaded all kic artifacts
	I0605 17:43:16.141797  435505 start.go:364] acquiring machines lock for ingress-addon-legacy-980425: {Name:mkb575629b60a697d4a0d9eeb5289290cb9403e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 17:43:16.142328  435505 start.go:368] acquired machines lock for "ingress-addon-legacy-980425" in 518.392µs
	I0605 17:43:16.142358  435505 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-980425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-980425 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0605 17:43:16.142435  435505 start.go:125] createHost starting for "" (driver="docker")
	I0605 17:43:16.144887  435505 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0605 17:43:16.145191  435505 start.go:159] libmachine.API.Create for "ingress-addon-legacy-980425" (driver="docker")
	I0605 17:43:16.145227  435505 client.go:168] LocalClient.Create starting
	I0605 17:43:16.145298  435505 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem
	I0605 17:43:16.145337  435505 main.go:141] libmachine: Decoding PEM data...
	I0605 17:43:16.145356  435505 main.go:141] libmachine: Parsing certificate...
	I0605 17:43:16.145439  435505 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem
	I0605 17:43:16.145462  435505 main.go:141] libmachine: Decoding PEM data...
	I0605 17:43:16.145479  435505 main.go:141] libmachine: Parsing certificate...
	I0605 17:43:16.145847  435505 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-980425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0605 17:43:16.163419  435505 cli_runner.go:211] docker network inspect ingress-addon-legacy-980425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0605 17:43:16.163509  435505 network_create.go:281] running [docker network inspect ingress-addon-legacy-980425] to gather additional debugging logs...
	I0605 17:43:16.163530  435505 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-980425
	W0605 17:43:16.181032  435505 cli_runner.go:211] docker network inspect ingress-addon-legacy-980425 returned with exit code 1
	I0605 17:43:16.181064  435505 network_create.go:284] error running [docker network inspect ingress-addon-legacy-980425]: docker network inspect ingress-addon-legacy-980425: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-980425 not found
	I0605 17:43:16.181079  435505 network_create.go:286] output of [docker network inspect ingress-addon-legacy-980425]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-980425 not found
	
	** /stderr **
	I0605 17:43:16.181138  435505 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 17:43:16.199246  435505 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000052880}
	I0605 17:43:16.199283  435505 network_create.go:123] attempt to create docker network ingress-addon-legacy-980425 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0605 17:43:16.199340  435505 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-980425 ingress-addon-legacy-980425
	I0605 17:43:16.277344  435505 network_create.go:107] docker network ingress-addon-legacy-980425 192.168.49.0/24 created
	I0605 17:43:16.277376  435505 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-980425" container
	I0605 17:43:16.277449  435505 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0605 17:43:16.294304  435505 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-980425 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-980425 --label created_by.minikube.sigs.k8s.io=true
	I0605 17:43:16.313198  435505 oci.go:103] Successfully created a docker volume ingress-addon-legacy-980425
	I0605 17:43:16.313295  435505 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-980425-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-980425 --entrypoint /usr/bin/test -v ingress-addon-legacy-980425:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -d /var/lib
	I0605 17:43:17.904513  435505 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-980425-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-980425 --entrypoint /usr/bin/test -v ingress-addon-legacy-980425:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -d /var/lib: (1.591170769s)
	I0605 17:43:17.904545  435505 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-980425
	I0605 17:43:17.904569  435505 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0605 17:43:17.904589  435505 kic.go:190] Starting extracting preloaded images to volume ...
	I0605 17:43:17.904678  435505 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-980425:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -I lz4 -xf /preloaded.tar -C /extractDir
	I0605 17:43:22.949592  435505 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-980425:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -I lz4 -xf /preloaded.tar -C /extractDir: (5.044854107s)
	I0605 17:43:22.949624  435505 kic.go:199] duration metric: took 5.045032 seconds to extract preloaded images to volume
	W0605 17:43:22.949764  435505 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0605 17:43:22.949875  435505 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0605 17:43:23.013893  435505 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-980425 --name ingress-addon-legacy-980425 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-980425 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-980425 --network ingress-addon-legacy-980425 --ip 192.168.49.2 --volume ingress-addon-legacy-980425:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f
	I0605 17:43:23.384122  435505 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-980425 --format={{.State.Running}}
	I0605 17:43:23.411037  435505 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-980425 --format={{.State.Status}}
	I0605 17:43:23.436793  435505 cli_runner.go:164] Run: docker exec ingress-addon-legacy-980425 stat /var/lib/dpkg/alternatives/iptables
	I0605 17:43:23.530679  435505 oci.go:144] the created container "ingress-addon-legacy-980425" has a running status.
	I0605 17:43:23.530713  435505 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/ingress-addon-legacy-980425/id_rsa...
	I0605 17:43:23.984122  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/ingress-addon-legacy-980425/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0605 17:43:23.984194  435505 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16634-402421/.minikube/machines/ingress-addon-legacy-980425/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0605 17:43:24.021301  435505 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-980425 --format={{.State.Status}}
	I0605 17:43:24.057539  435505 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0605 17:43:24.057559  435505 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-980425 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0605 17:43:24.189685  435505 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-980425 --format={{.State.Status}}
	I0605 17:43:24.221715  435505 machine.go:88] provisioning docker machine ...
	I0605 17:43:24.221753  435505 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-980425"
	I0605 17:43:24.221823  435505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-980425
	I0605 17:43:24.244327  435505 main.go:141] libmachine: Using SSH client type: native
	I0605 17:43:24.244806  435505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0605 17:43:24.244825  435505 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-980425 && echo "ingress-addon-legacy-980425" | sudo tee /etc/hostname
	I0605 17:43:24.448157  435505 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-980425
	
	I0605 17:43:24.448303  435505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-980425
	I0605 17:43:24.487891  435505 main.go:141] libmachine: Using SSH client type: native
	I0605 17:43:24.488366  435505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0605 17:43:24.488395  435505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-980425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-980425/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-980425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0605 17:43:24.641410  435505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0605 17:43:24.641436  435505 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16634-402421/.minikube CaCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16634-402421/.minikube}
	I0605 17:43:24.641464  435505 ubuntu.go:177] setting up certificates
	I0605 17:43:24.641473  435505 provision.go:83] configureAuth start
	I0605 17:43:24.641533  435505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-980425
	I0605 17:43:24.666080  435505 provision.go:138] copyHostCerts
	I0605 17:43:24.666116  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem
	I0605 17:43:24.666146  435505 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem, removing ...
	I0605 17:43:24.666152  435505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem
	I0605 17:43:24.666212  435505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem (1675 bytes)
	I0605 17:43:24.666283  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem
	I0605 17:43:24.666299  435505 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem, removing ...
	I0605 17:43:24.666303  435505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem
	I0605 17:43:24.666372  435505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem (1082 bytes)
	I0605 17:43:24.666427  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem
	I0605 17:43:24.666454  435505 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem, removing ...
	I0605 17:43:24.666458  435505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem
	I0605 17:43:24.666500  435505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem (1123 bytes)
	I0605 17:43:24.666559  435505 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-980425 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-980425]
	I0605 17:43:25.391958  435505 provision.go:172] copyRemoteCerts
	I0605 17:43:25.392027  435505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0605 17:43:25.392073  435505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-980425
	I0605 17:43:25.410198  435505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/ingress-addon-legacy-980425/id_rsa Username:docker}
	I0605 17:43:25.510804  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0605 17:43:25.510865  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0605 17:43:25.540064  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0605 17:43:25.540131  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0605 17:43:25.570287  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0605 17:43:25.570347  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0605 17:43:25.598760  435505 provision.go:86] duration metric: configureAuth took 957.269576ms
	I0605 17:43:25.598789  435505 ubuntu.go:193] setting minikube options for container-runtime
	I0605 17:43:25.599012  435505 config.go:182] Loaded profile config "ingress-addon-legacy-980425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0605 17:43:25.599132  435505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-980425
	I0605 17:43:25.617173  435505 main.go:141] libmachine: Using SSH client type: native
	I0605 17:43:25.617603  435505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0605 17:43:25.617624  435505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0605 17:43:25.902704  435505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0605 17:43:25.902726  435505 machine.go:91] provisioned docker machine in 1.680984748s
	I0605 17:43:25.902736  435505 client.go:171] LocalClient.Create took 9.757498064s
	I0605 17:43:25.902747  435505 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-980425" took 9.75755632s
	I0605 17:43:25.902755  435505 start.go:300] post-start starting for "ingress-addon-legacy-980425" (driver="docker")
	I0605 17:43:25.902761  435505 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0605 17:43:25.902831  435505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0605 17:43:25.902874  435505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-980425
	I0605 17:43:25.922382  435505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/ingress-addon-legacy-980425/id_rsa Username:docker}
	I0605 17:43:26.024332  435505 ssh_runner.go:195] Run: cat /etc/os-release
	I0605 17:43:26.029366  435505 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0605 17:43:26.029412  435505 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0605 17:43:26.029424  435505 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0605 17:43:26.029430  435505 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0605 17:43:26.029446  435505 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/addons for local assets ...
	I0605 17:43:26.029530  435505 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/files for local assets ...
	I0605 17:43:26.029641  435505 filesync.go:149] local asset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> 4078132.pem in /etc/ssl/certs
	I0605 17:43:26.029653  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> /etc/ssl/certs/4078132.pem
	I0605 17:43:26.029777  435505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0605 17:43:26.042079  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 17:43:26.073654  435505 start.go:303] post-start completed in 170.883917ms
	I0605 17:43:26.074067  435505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-980425
	I0605 17:43:26.092619  435505 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/config.json ...
	I0605 17:43:26.092909  435505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 17:43:26.092963  435505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-980425
	I0605 17:43:26.111322  435505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/ingress-addon-legacy-980425/id_rsa Username:docker}
	I0605 17:43:26.206523  435505 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0605 17:43:26.212545  435505 start.go:128] duration metric: createHost completed in 10.070093247s
	I0605 17:43:26.212571  435505 start.go:83] releasing machines lock for "ingress-addon-legacy-980425", held for 10.07022855s
	I0605 17:43:26.212650  435505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-980425
	I0605 17:43:26.231455  435505 ssh_runner.go:195] Run: cat /version.json
	I0605 17:43:26.231514  435505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-980425
	I0605 17:43:26.231790  435505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0605 17:43:26.231850  435505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-980425
	I0605 17:43:26.258237  435505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/ingress-addon-legacy-980425/id_rsa Username:docker}
	I0605 17:43:26.261925  435505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/ingress-addon-legacy-980425/id_rsa Username:docker}
	I0605 17:43:26.352633  435505 ssh_runner.go:195] Run: systemctl --version
	I0605 17:43:26.507530  435505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0605 17:43:26.657104  435505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0605 17:43:26.663069  435505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 17:43:26.688282  435505 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0605 17:43:26.688368  435505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 17:43:26.722691  435505 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
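The two find/-exec mv runs above simply rename any matching CNI config aside so cri-o stops loading it. A minimal Go sketch of the same rename-aside idea (a hypothetical local helper, not minikube's actual code, and run without the SSH layer):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames CNI config files matching the given glob
// patterns by appending ".mk_disabled", so the container runtime stops
// loading them -- the same effect as the find/-exec mv commands above.
func disableCNIConfigs(dir string, patterns ...string) ([]string, error) {
	var disabled []string
	for _, pat := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already moved aside
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d", "*bridge*", "*podman*")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", disabled)
}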
	I0605 17:43:26.722716  435505 start.go:481] detecting cgroup driver to use...
	I0605 17:43:26.722749  435505 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0605 17:43:26.722798  435505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0605 17:43:26.741756  435505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0605 17:43:26.756245  435505 docker.go:193] disabling cri-docker service (if available) ...
	I0605 17:43:26.756307  435505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0605 17:43:26.772902  435505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0605 17:43:26.790064  435505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0605 17:43:26.894217  435505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0605 17:43:27.007671  435505 docker.go:209] disabling docker service ...
	I0605 17:43:27.007752  435505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0605 17:43:27.031525  435505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0605 17:43:27.048031  435505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0605 17:43:27.160544  435505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0605 17:43:27.273755  435505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0605 17:43:27.288694  435505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 17:43:27.311032  435505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0605 17:43:27.311124  435505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:43:27.324871  435505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0605 17:43:27.324957  435505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:43:27.338611  435505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:43:27.351805  435505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
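The sed edits above each rewrite a single `key = value` line in cri-o's drop-in config. A rough Go equivalent of that rewrite (a hypothetical helper operating on a scratch copy, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setConfigKey rewrites every `key = ...` line in a cri-o drop-in config
// to `key = "value"` -- the same edit the sed invocations above perform
// for pause_image and cgroup_manager.
func setConfigKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for i, line := range lines {
		if strings.Contains(line, key+" = ") {
			lines[i] = fmt.Sprintf("%s = %q", key, value)
		}
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	path := "/tmp/02-crio.conf" // scratch copy for the sketch
	if err := setConfigKey(path, "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := setConfigKey(path, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}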
	I0605 17:43:27.364349  435505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0605 17:43:27.376009  435505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0605 17:43:27.386801  435505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0605 17:43:27.397639  435505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 17:43:27.496597  435505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0605 17:43:27.629663  435505 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0605 17:43:27.629783  435505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
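The two "Will wait 60s" steps here and below are simple bounded polls. A sketch of that pattern in Go (a hypothetical helper; minikube's real retry logic differs in detail):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the existence of a unix socket path until the
// timeout elapses, mirroring "Will wait 60s for socket path ..." above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket appeared
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is up")
}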
	I0605 17:43:27.634758  435505 start.go:549] Will wait 60s for crictl version
	I0605 17:43:27.634839  435505 ssh_runner.go:195] Run: which crictl
	I0605 17:43:27.639265  435505 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0605 17:43:27.687202  435505 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0605 17:43:27.687316  435505 ssh_runner.go:195] Run: crio --version
	I0605 17:43:27.732640  435505 ssh_runner.go:195] Run: crio --version
	I0605 17:43:27.779247  435505 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.5 ...
	I0605 17:43:27.781305  435505 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-980425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 17:43:27.799393  435505 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0605 17:43:27.804144  435505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
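The grep/echo/cp pipeline above is an idempotent hosts-file update: drop any stale line for the name, then append the fresh mapping. A Go sketch of the same idea (a hypothetical helper, pointed at a scratch file rather than /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites the given hosts file so that exactly one line
// maps ip -> hostname: any stale line ending in the hostname is dropped
// (the `grep -v` part) and a fresh entry is appended (the `echo` part).
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry, drop it
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}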
	I0605 17:43:27.817866  435505 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0605 17:43:27.817941  435505 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 17:43:27.870894  435505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0605 17:43:27.870966  435505 ssh_runner.go:195] Run: which lz4
	I0605 17:43:27.875441  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0605 17:43:27.875542  435505 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0605 17:43:27.880246  435505 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0605 17:43:27.880282  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0605 17:43:30.144294  435505 crio.go:444] Took 2.268786 seconds to copy over tarball
	I0605 17:43:30.144386  435505 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0605 17:43:32.927336  435505 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.782924951s)
	I0605 17:43:32.927363  435505 crio.go:451] Took 2.783040 seconds to extract the tarball
	I0605 17:43:32.927373  435505 ssh_runner.go:146] rm: /preloaded.tar.lz4
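The stat that fails with status 1 followed by the scp above is a copy-if-missing check. A local Go sketch of that logic (a hypothetical helper; the real transfer goes over SSH):

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing copies src to dst only when dst does not already exist,
// mirroring the existence check (`stat`) that precedes the scp in the log.
func copyIfMissing(src, dst string) (copied bool, err error) {
	if _, err := os.Stat(dst); err == nil {
		return false, nil // already present, skip transfer
	} else if !os.IsNotExist(err) {
		return false, err
	}
	in, err := os.Open(src)
	if err != nil {
		return false, err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return false, err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err == nil, err
}

func main() {
	copied, err := copyIfMissing("preloaded.tar.lz4", "/tmp/preloaded.tar.lz4")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("copied:", copied)
}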
	I0605 17:43:33.012705  435505 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 17:43:33.056929  435505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0605 17:43:33.056955  435505 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0605 17:43:33.056993  435505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0605 17:43:33.057196  435505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0605 17:43:33.057272  435505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0605 17:43:33.057337  435505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0605 17:43:33.057394  435505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0605 17:43:33.057474  435505 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0605 17:43:33.057540  435505 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0605 17:43:33.057615  435505 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0605 17:43:33.058641  435505 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0605 17:43:33.059135  435505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0605 17:43:33.059308  435505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0605 17:43:33.059537  435505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0605 17:43:33.059698  435505 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0605 17:43:33.059870  435505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0605 17:43:33.060209  435505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0605 17:43:33.060901  435505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0605 17:43:33.493639  435505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0605 17:43:33.509448  435505 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0605 17:43:33.509657  435505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0605 17:43:33.514407  435505 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0605 17:43:33.514620  435505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0605 17:43:33.517129  435505 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0605 17:43:33.517297  435505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0605 17:43:33.532117  435505 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0605 17:43:33.532374  435505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0605 17:43:33.541790  435505 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0605 17:43:33.542006  435505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0605 17:43:33.579794  435505 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0605 17:43:33.580012  435505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0605 17:43:33.625095  435505 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0605 17:43:33.625144  435505 cri.go:217] Removing image: registry.k8s.io/pause:3.2
	I0605 17:43:33.625206  435505 ssh_runner.go:195] Run: which crictl
	I0605 17:43:33.727058  435505 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0605 17:43:33.727106  435505 cri.go:217] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0605 17:43:33.727157  435505 ssh_runner.go:195] Run: which crictl
	I0605 17:43:33.727245  435505 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0605 17:43:33.727264  435505 cri.go:217] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0605 17:43:33.727291  435505 ssh_runner.go:195] Run: which crictl
	I0605 17:43:33.745463  435505 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0605 17:43:33.745505  435505 cri.go:217] Removing image: registry.k8s.io/coredns:1.6.7
	I0605 17:43:33.745559  435505 ssh_runner.go:195] Run: which crictl
	I0605 17:43:33.745659  435505 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0605 17:43:33.745688  435505 cri.go:217] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0605 17:43:33.745715  435505 ssh_runner.go:195] Run: which crictl
	I0605 17:43:33.745783  435505 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0605 17:43:33.745799  435505 cri.go:217] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0605 17:43:33.745822  435505 ssh_runner.go:195] Run: which crictl
	W0605 17:43:33.763099  435505 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0605 17:43:33.763277  435505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0605 17:43:33.764932  435505 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0605 17:43:33.765326  435505 cri.go:217] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0605 17:43:33.765067  435505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0605 17:43:33.765130  435505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0605 17:43:33.765159  435505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0605 17:43:33.765235  435505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0605 17:43:33.765255  435505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0605 17:43:33.765278  435505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0605 17:43:33.765592  435505 ssh_runner.go:195] Run: which crictl
	I0605 17:43:34.072932  435505 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0605 17:43:34.073034  435505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0605 17:43:34.072957  435505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0605 17:43:34.073105  435505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0605 17:43:34.073045  435505 cri.go:217] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0605 17:43:34.073162  435505 ssh_runner.go:195] Run: which crictl
	I0605 17:43:34.072996  435505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0605 17:43:34.073236  435505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0605 17:43:34.073252  435505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0605 17:43:34.073315  435505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0605 17:43:34.078351  435505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0605 17:43:34.130948  435505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0605 17:43:34.155231  435505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0605 17:43:34.155355  435505 cache_images.go:92] LoadImages completed in 1.098387443s
	W0605 17:43:34.155457  435505 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7: no such file or directory
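The warning above shows that a cache miss during image loading is deliberately non-fatal: the start continues and kubeadm pulls the images from the registry instead. A sketch of that warn-and-continue shape (a hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"os"
)

// loadCachedImages stats each cached image tarball before transfer; the
// first miss aborts the load, and the caller downgrades the error to a
// warning (the "X Unable to load cached images" line above) because
// kubeadm can still pull the images from the registry.
func loadCachedImages(cacheFiles []string) error {
	for _, f := range cacheFiles {
		if _, err := os.Stat(f); err != nil {
			return fmt.Errorf("loading cached images: %w", err)
		}
		// scp + `sudo podman load -i <file>` would run here
	}
	return nil
}

func main() {
	if err := loadCachedImages([]string{"/no/such/cache/coredns_1.6.7"}); err != nil {
		fmt.Println("X Unable to load cached images:", err) // warn, do not fail the start
	}
}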
	I0605 17:43:34.155562  435505 ssh_runner.go:195] Run: crio config
	I0605 17:43:34.222042  435505 cni.go:84] Creating CNI manager for ""
	I0605 17:43:34.222107  435505 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 17:43:34.222138  435505 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0605 17:43:34.222187  435505 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-980425 NodeName:ingress-addon-legacy-980425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0605 17:43:34.222379  435505 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-980425"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0605 17:43:34.222502  435505 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-980425 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-980425 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0605 17:43:34.222603  435505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0605 17:43:34.233962  435505 binaries.go:44] Found k8s binaries, skipping transfer
	I0605 17:43:34.234109  435505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0605 17:43:34.245371  435505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0605 17:43:34.267298  435505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0605 17:43:34.288446  435505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0605 17:43:34.309689  435505 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0605 17:43:34.314507  435505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0605 17:43:34.328461  435505 certs.go:56] Setting up /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425 for IP: 192.168.49.2
	I0605 17:43:34.328493  435505 certs.go:190] acquiring lock for shared ca certs: {Name:mkcde6289d01a116d789395fcd8dd485889e790f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:43:34.328692  435505 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key
	I0605 17:43:34.328756  435505 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key
	I0605 17:43:34.328812  435505 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.key
	I0605 17:43:34.328827  435505 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt with IP's: []
	I0605 17:43:34.766613  435505 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt ...
	I0605 17:43:34.766647  435505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: {Name:mk3693db41393c10752dd06f0f3b0931c85aff1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:43:34.767244  435505 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.key ...
	I0605 17:43:34.767261  435505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.key: {Name:mk4507cdd4dd503bb9749b4f2d05bbc338e53a47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:43:34.767727  435505 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.key.dd3b5fb2
	I0605 17:43:34.767749  435505 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0605 17:43:35.403139  435505 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.crt.dd3b5fb2 ...
	I0605 17:43:35.403175  435505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.crt.dd3b5fb2: {Name:mk3e4d2e2e03712eba52ca49881b3d2fe4e13ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:43:35.403391  435505 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.key.dd3b5fb2 ...
	I0605 17:43:35.403418  435505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.key.dd3b5fb2: {Name:mk9875501f8ae702a73b4f4da3f7a21a960ecc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:43:35.403966  435505 certs.go:337] copying /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.crt
	I0605 17:43:35.404060  435505 certs.go:341] copying /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.key
	I0605 17:43:35.404122  435505 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/proxy-client.key
	I0605 17:43:35.404140  435505 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/proxy-client.crt with IP's: []
	I0605 17:43:36.488274  435505 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/proxy-client.crt ...
	I0605 17:43:36.488308  435505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/proxy-client.crt: {Name:mk3ce6976adf69aad72ab952807505fa4e49e5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:43:36.488503  435505 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/proxy-client.key ...
	I0605 17:43:36.488517  435505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/proxy-client.key: {Name:mk3490413343b4beef4813f993a6480e8ec848fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
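The certs steps above generate keys and CA-signed certificates whose SANs include the apiserver IPs listed in the log. A compact Go sketch of the x509 mechanics with those same IPs (self-signed for brevity, whereas minikube signs with the minikubeCA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate the key pair, then a certificate carrying the IP SANs
	// seen in "Generating cert ... with IP's: [...]" above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	// Template doubles as parent, so the result is self-signed.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}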
	I0605 17:43:36.489706  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0605 17:43:36.489737  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0605 17:43:36.489752  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0605 17:43:36.489767  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0605 17:43:36.489785  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0605 17:43:36.489800  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0605 17:43:36.489815  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0605 17:43:36.489829  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0605 17:43:36.489892  435505 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem (1338 bytes)
	W0605 17:43:36.489936  435505 certs.go:433] ignoring /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813_empty.pem, impossibly tiny 0 bytes
	I0605 17:43:36.489949  435505 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem (1679 bytes)
	I0605 17:43:36.489979  435505 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem (1082 bytes)
	I0605 17:43:36.490011  435505 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem (1123 bytes)
	I0605 17:43:36.490038  435505 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem (1675 bytes)
	I0605 17:43:36.490094  435505 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 17:43:36.490126  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> /usr/share/ca-certificates/4078132.pem
	I0605 17:43:36.490143  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:43:36.490161  435505 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem -> /usr/share/ca-certificates/407813.pem
	I0605 17:43:36.490889  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0605 17:43:36.521533  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0605 17:43:36.551592  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0605 17:43:36.581629  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0605 17:43:36.612214  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0605 17:43:36.642609  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0605 17:43:36.673163  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0605 17:43:36.702413  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0605 17:43:36.732456  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /usr/share/ca-certificates/4078132.pem (1708 bytes)
	I0605 17:43:36.762447  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0605 17:43:36.791802  435505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem --> /usr/share/ca-certificates/407813.pem (1338 bytes)
	I0605 17:43:36.820956  435505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0605 17:43:36.842746  435505 ssh_runner.go:195] Run: openssl version
	I0605 17:43:36.850044  435505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0605 17:43:36.861706  435505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:43:36.866395  435505 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun  5 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:43:36.866461  435505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:43:36.875470  435505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0605 17:43:36.887281  435505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407813.pem && ln -fs /usr/share/ca-certificates/407813.pem /etc/ssl/certs/407813.pem"
	I0605 17:43:36.899079  435505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407813.pem
	I0605 17:43:36.903849  435505 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun  5 17:39 /usr/share/ca-certificates/407813.pem
	I0605 17:43:36.904001  435505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407813.pem
	I0605 17:43:36.912907  435505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407813.pem /etc/ssl/certs/51391683.0"
	I0605 17:43:36.924907  435505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4078132.pem && ln -fs /usr/share/ca-certificates/4078132.pem /etc/ssl/certs/4078132.pem"
	I0605 17:43:36.937024  435505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4078132.pem
	I0605 17:43:36.942145  435505 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun  5 17:39 /usr/share/ca-certificates/4078132.pem
	I0605 17:43:36.942216  435505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4078132.pem
	I0605 17:43:36.951031  435505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4078132.pem /etc/ssl/certs/3ec20f2e.0"
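The `test -L || ln -fs` commands above install each CA under OpenSSL's hashed-name lookup convention (<subject-hash>.0 in the certs directory). A Go sketch of the symlink half (a hypothetical helper; the hash value comes from the `openssl x509 -hash` calls shown above, and computing it in pure Go is beyond this sketch):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// linkCACert installs a CA certificate under OpenSSL's hashed-name
// convention, certsDir/<subject-hash>.0 -> certPath, which is what the
// `test -L || ln -fs` commands above set up.
func linkCACert(certsDir, hash, certPath string) error {
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink (or file) already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/tmp/certs", "b5213941", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}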
	I0605 17:43:36.963337  435505 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0605 17:43:36.968024  435505 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0605 17:43:36.968080  435505 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-980425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-980425 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:43:36.968171  435505 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0605 17:43:36.968232  435505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0605 17:43:37.018321  435505 cri.go:88] found id: ""
	I0605 17:43:37.018447  435505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0605 17:43:37.031794  435505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0605 17:43:37.044628  435505 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0605 17:43:37.044740  435505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0605 17:43:37.056017  435505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0605 17:43:37.056063  435505 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0605 17:43:37.112011  435505 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0605 17:43:37.112329  435505 kubeadm.go:322] [preflight] Running pre-flight checks
	I0605 17:43:37.167124  435505 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0605 17:43:37.167221  435505 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-aws
	I0605 17:43:37.167268  435505 kubeadm.go:322] OS: Linux
	I0605 17:43:37.167332  435505 kubeadm.go:322] CGROUPS_CPU: enabled
	I0605 17:43:37.167391  435505 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0605 17:43:37.167447  435505 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0605 17:43:37.167508  435505 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0605 17:43:37.167570  435505 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0605 17:43:37.167633  435505 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0605 17:43:37.259816  435505 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0605 17:43:37.260036  435505 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0605 17:43:37.260175  435505 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0605 17:43:37.507625  435505 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0605 17:43:37.509348  435505 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0605 17:43:37.509628  435505 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0605 17:43:37.616360  435505 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0605 17:43:37.620790  435505 out.go:204]   - Generating certificates and keys ...
	I0605 17:43:37.620972  435505 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0605 17:43:37.621053  435505 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0605 17:43:38.519376  435505 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0605 17:43:39.070072  435505 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0605 17:43:39.618189  435505 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0605 17:43:39.759982  435505 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0605 17:43:40.166464  435505 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0605 17:43:40.166947  435505 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-980425 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0605 17:43:40.452825  435505 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0605 17:43:40.453239  435505 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-980425 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0605 17:43:41.321838  435505 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0605 17:43:42.448969  435505 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0605 17:43:43.194503  435505 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0605 17:43:43.194923  435505 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0605 17:43:43.449287  435505 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0605 17:43:43.577607  435505 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0605 17:43:44.009186  435505 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0605 17:43:44.248204  435505 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0605 17:43:44.249057  435505 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0605 17:43:44.251409  435505 out.go:204]   - Booting up control plane ...
	I0605 17:43:44.251513  435505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0605 17:43:44.257060  435505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0605 17:43:44.262870  435505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0605 17:43:44.267882  435505 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0605 17:43:44.270868  435505 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0605 17:43:56.273487  435505 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002517 seconds
	I0605 17:43:56.273609  435505 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0605 17:43:56.287863  435505 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0605 17:43:56.814292  435505 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0605 17:43:56.814443  435505 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-980425 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0605 17:43:57.322704  435505 kubeadm.go:322] [bootstrap-token] Using token: ct4fli.oc41hio806y65cg4
	I0605 17:43:57.324654  435505 out.go:204]   - Configuring RBAC rules ...
	I0605 17:43:57.324790  435505 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0605 17:43:57.330709  435505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0605 17:43:57.339102  435505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0605 17:43:57.342647  435505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0605 17:43:57.353265  435505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0605 17:43:57.369278  435505 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0605 17:43:57.378733  435505 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0605 17:43:57.679263  435505 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0605 17:43:57.846332  435505 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0605 17:43:57.846351  435505 kubeadm.go:322] 
	I0605 17:43:57.846408  435505 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0605 17:43:57.846413  435505 kubeadm.go:322] 
	I0605 17:43:57.846485  435505 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0605 17:43:57.846490  435505 kubeadm.go:322] 
	I0605 17:43:57.846514  435505 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0605 17:43:57.846569  435505 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0605 17:43:57.846617  435505 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0605 17:43:57.846622  435505 kubeadm.go:322] 
	I0605 17:43:57.846671  435505 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0605 17:43:57.846741  435505 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0605 17:43:57.846806  435505 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0605 17:43:57.846811  435505 kubeadm.go:322] 
	I0605 17:43:57.846897  435505 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0605 17:43:57.846974  435505 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0605 17:43:57.846982  435505 kubeadm.go:322] 
	I0605 17:43:57.847066  435505 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ct4fli.oc41hio806y65cg4 \
	I0605 17:43:57.847166  435505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 \
	I0605 17:43:57.847189  435505 kubeadm.go:322]     --control-plane 
	I0605 17:43:57.847193  435505 kubeadm.go:322] 
	I0605 17:43:57.847273  435505 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0605 17:43:57.847277  435505 kubeadm.go:322] 
	I0605 17:43:57.847354  435505 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ct4fli.oc41hio806y65cg4 \
	I0605 17:43:57.847451  435505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 
	I0605 17:43:57.851399  435505 kubeadm.go:322] W0605 17:43:37.111381    1231 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0605 17:43:57.851626  435505 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0605 17:43:57.851733  435505 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0605 17:43:57.851859  435505 kubeadm.go:322] W0605 17:43:44.256877    1231 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0605 17:43:57.852007  435505 kubeadm.go:322] W0605 17:43:44.262647    1231 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0605 17:43:57.852025  435505 cni.go:84] Creating CNI manager for ""
	I0605 17:43:57.852033  435505 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 17:43:57.854271  435505 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0605 17:43:57.856149  435505 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0605 17:43:57.861576  435505 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0605 17:43:57.861599  435505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0605 17:43:57.884964  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0605 17:43:58.347394  435505 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0605 17:43:58.347557  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:43:58.347645  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=b059332e570e1d712234ec4f823aa77854e7956d minikube.k8s.io/name=ingress-addon-legacy-980425 minikube.k8s.io/updated_at=2023_06_05T17_43_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:43:58.367187  435505 ops.go:34] apiserver oom_adj: -16
	I0605 17:43:58.485576  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:43:59.107825  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:43:59.607900  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:00.108912  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:00.607287  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:01.107616  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:01.607508  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:02.108132  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:02.607457  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:03.107313  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:03.607704  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:04.108090  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:04.607881  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:05.107409  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:05.607640  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:06.107615  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:06.607289  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:07.107312  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:07.607986  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:08.108265  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:08.607804  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:09.108006  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:09.607306  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:10.107757  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:10.607781  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:11.107772  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:11.608246  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:12.108171  435505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:44:12.259947  435505 kubeadm.go:1076] duration metric: took 13.912445452s to wait for elevateKubeSystemPrivileges.
	I0605 17:44:12.259985  435505 kubeadm.go:406] StartCluster complete in 35.291907674s
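Note: the block of `kubectl get sa default` runs above is a fixed-interval poll — minikube retries roughly every 500ms until the default service account exists, which is the signal that the control plane is admitting requests (the 13.9s total is reported by elevateKubeSystemPrivileges). A minimal sketch of that pattern, assuming a hypothetical runSSH helper in place of minikube's ssh_runner:

    package example

    import (
    	"fmt"
    	"time"
    )

    // waitForDefaultSA polls through runSSH until `kubectl get sa default`
    // succeeds or the timeout elapses. runSSH is a stand-in for ssh_runner.
    func waitForDefaultSA(runSSH func(cmd string) error, timeout time.Duration) error {
    	cmd := "sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default" +
    		" --kubeconfig=/var/lib/minikube/kubeconfig"
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := runSSH(cmd); err == nil {
    			return nil // service account found; API server is serving
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for default service account")
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    }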
	I0605 17:44:12.260010  435505 settings.go:142] acquiring lock: {Name:mk7ddedb44759cc39266e9c612309013659bd7a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:44:12.260079  435505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:44:12.260931  435505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/kubeconfig: {Name:mkb77de9bf1ac5a664886fbfefd28a762472c016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:44:12.261674  435505 kapi.go:59] client config for ingress-addon-legacy-980425: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt", KeyFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.key", CAFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13df7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
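The rest.Config dump above reduces to a client-go client built from the profile's client certificate, key, and the cluster CA. A hedged equivalent (same host and paths as in the log; error handling trimmed, not minikube's actual code):

    package example

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // newProfileClient builds a clientset the way kapi.go's config dump
    // describes: certificate auth against the API server at 192.168.49.2:8443.
    func newProfileClient() (*kubernetes.Clientset, error) {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt",
    		},
    	}
    	return kubernetes.NewForConfig(cfg)
    }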
	I0605 17:44:12.263157  435505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0605 17:44:12.263419  435505 config.go:182] Loaded profile config "ingress-addon-legacy-980425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0605 17:44:12.263458  435505 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0605 17:44:12.263523  435505 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-980425"
	I0605 17:44:12.263536  435505 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-980425"
	I0605 17:44:12.263597  435505 host.go:66] Checking if "ingress-addon-legacy-980425" exists ...
	I0605 17:44:12.264440  435505 cert_rotation.go:137] Starting client certificate rotation controller
	I0605 17:44:12.264483  435505 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-980425"
	I0605 17:44:12.264504  435505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-980425"
	I0605 17:44:12.264832  435505 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-980425 --format={{.State.Status}}
	I0605 17:44:12.265225  435505 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-980425 --format={{.State.Status}}
	I0605 17:44:12.300848  435505 kapi.go:59] client config for ingress-addon-legacy-980425: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt", KeyFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.key", CAFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13df7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0605 17:44:12.324653  435505 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0605 17:44:12.326380  435505 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 17:44:12.326404  435505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0605 17:44:12.326477  435505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-980425
	I0605 17:44:12.350762  435505 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-980425"
	I0605 17:44:12.350807  435505 host.go:66] Checking if "ingress-addon-legacy-980425" exists ...
	I0605 17:44:12.351284  435505 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-980425 --format={{.State.Status}}
	I0605 17:44:12.367167  435505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/ingress-addon-legacy-980425/id_rsa Username:docker}
	I0605 17:44:12.389667  435505 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0605 17:44:12.389688  435505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0605 17:44:12.389749  435505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-980425
	I0605 17:44:12.413611  435505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/ingress-addon-legacy-980425/id_rsa Username:docker}
	I0605 17:44:12.484255  435505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0605 17:44:12.567870  435505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 17:44:12.635290  435505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0605 17:44:12.943116  435505 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-980425" context rescaled to 1 replicas
	I0605 17:44:12.943198  435505 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0605 17:44:12.950194  435505 out.go:177] * Verifying Kubernetes components...
	I0605 17:44:12.952948  435505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:44:12.954256  435505 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
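The sed pipeline at 17:44:12.484255 is what produces this "host record injected" message: it splices a hosts plugin stanza into the CoreDNS Corefile ahead of the forward directive, so host.minikube.internal resolves to the host gateway. Reconstructed from the sed expression in the log, the resulting Corefile fragment looks like:

            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf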
	I0605 17:44:13.089753  435505 kapi.go:59] client config for ingress-addon-legacy-980425: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt", KeyFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.key", CAFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13df7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0605 17:44:13.090983  435505 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0605 17:44:13.093129  435505 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-980425" to be "Ready" ...
	I0605 17:44:13.093325  435505 addons.go:499] enable addons completed in 829.864854ms: enabled=[storage-provisioner default-storageclass]
	I0605 17:44:15.102056  435505 node_ready.go:58] node "ingress-addon-legacy-980425" has status "Ready":"False"
	I0605 17:44:17.601856  435505 node_ready.go:58] node "ingress-addon-legacy-980425" has status "Ready":"False"
	I0605 17:44:20.102049  435505 node_ready.go:58] node "ingress-addon-legacy-980425" has status "Ready":"False"
	I0605 17:44:21.601300  435505 node_ready.go:49] node "ingress-addon-legacy-980425" has status "Ready":"True"
	I0605 17:44:21.601329  435505 node_ready.go:38] duration metric: took 8.50817449s waiting for node "ingress-addon-legacy-980425" to be "Ready" ...
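The node_ready loop above is polling the node's Ready condition until it flips to True. A sketch of the underlying check with client-go (polling and backoff omitted; not minikube's actual implementation):

    package example

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeIsReady reports whether the named node's Ready condition is True,
    // the same predicate node_ready.go is looping on in the log above.
    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }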
	I0605 17:44:21.601339  435505 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0605 17:44:21.608847  435505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-9n2md" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:23.615129  435505 pod_ready.go:102] pod "coredns-66bff467f8-9n2md" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-05 17:44:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0605 17:44:26.114449  435505 pod_ready.go:102] pod "coredns-66bff467f8-9n2md" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-05 17:44:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0605 17:44:28.117660  435505 pod_ready.go:102] pod "coredns-66bff467f8-9n2md" in "kube-system" namespace has status "Ready":"False"
	I0605 17:44:30.118374  435505 pod_ready.go:102] pod "coredns-66bff467f8-9n2md" in "kube-system" namespace has status "Ready":"False"
	I0605 17:44:32.617034  435505 pod_ready.go:102] pod "coredns-66bff467f8-9n2md" in "kube-system" namespace has status "Ready":"False"
	I0605 17:44:34.617765  435505 pod_ready.go:102] pod "coredns-66bff467f8-9n2md" in "kube-system" namespace has status "Ready":"False"
	I0605 17:44:37.130795  435505 pod_ready.go:92] pod "coredns-66bff467f8-9n2md" in "kube-system" namespace has status "Ready":"True"
	I0605 17:44:37.130869  435505 pod_ready.go:81] duration metric: took 15.521983018s waiting for pod "coredns-66bff467f8-9n2md" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:37.130899  435505 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-980425" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:37.138902  435505 pod_ready.go:92] pod "etcd-ingress-addon-legacy-980425" in "kube-system" namespace has status "Ready":"True"
	I0605 17:44:37.138982  435505 pod_ready.go:81] duration metric: took 8.060877ms waiting for pod "etcd-ingress-addon-legacy-980425" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:37.139012  435505 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-980425" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:37.144866  435505 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-980425" in "kube-system" namespace has status "Ready":"True"
	I0605 17:44:37.144895  435505 pod_ready.go:81] duration metric: took 5.846061ms waiting for pod "kube-apiserver-ingress-addon-legacy-980425" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:37.144916  435505 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-980425" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:37.152181  435505 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-980425" in "kube-system" namespace has status "Ready":"True"
	I0605 17:44:37.152251  435505 pod_ready.go:81] duration metric: took 7.324295ms waiting for pod "kube-controller-manager-ingress-addon-legacy-980425" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:37.152278  435505 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6vtpr" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:37.160721  435505 pod_ready.go:92] pod "kube-proxy-6vtpr" in "kube-system" namespace has status "Ready":"True"
	I0605 17:44:37.160794  435505 pod_ready.go:81] duration metric: took 8.494659ms waiting for pod "kube-proxy-6vtpr" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:37.160841  435505 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-980425" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:37.313216  435505 request.go:628] Waited for 152.272798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-980425
	I0605 17:44:37.513108  435505 request.go:628] Waited for 197.183446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-980425
	I0605 17:44:37.515965  435505 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-980425" in "kube-system" namespace has status "Ready":"True"
	I0605 17:44:37.515990  435505 pod_ready.go:81] duration metric: took 355.122431ms waiting for pod "kube-scheduler-ingress-addon-legacy-980425" in "kube-system" namespace to be "Ready" ...
	I0605 17:44:37.516003  435505 pod_ready.go:38] duration metric: took 15.914649217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
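The "Waited for ... due to client-side throttling" messages interleaved above come from client-go's default client-side rate limiter; the config dumps earlier show QPS:0, Burst:0, which client-go treats as the defaults (5 QPS, burst 10). The test does not change these, but for reference the knobs live on rest.Config — a sketch only:

    package example

    import "k8s.io/client-go/rest"

    // withHigherRateLimits copies cfg and raises the client-side limiter that
    // produces the "client-side throttling" waits logged above.
    func withHigherRateLimits(cfg *rest.Config) *rest.Config {
    	out := rest.CopyConfig(cfg)
    	out.QPS = 50   // zero means the default of 5
    	out.Burst = 100 // zero means the default of 10
    	return out
    }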
	I0605 17:44:37.516038  435505 api_server.go:52] waiting for apiserver process to appear ...
	I0605 17:44:37.516115  435505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 17:44:37.529535  435505 api_server.go:72] duration metric: took 24.586289961s to wait for apiserver process to appear ...
	I0605 17:44:37.529566  435505 api_server.go:88] waiting for apiserver healthz status ...
	I0605 17:44:37.529582  435505 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0605 17:44:37.539084  435505 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0605 17:44:37.540018  435505 api_server.go:141] control plane version: v1.18.20
	I0605 17:44:37.540040  435505 api_server.go:131] duration metric: took 10.467293ms to wait for apiserver health ...
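The healthz check above is a plain GET against the API server's /healthz path through the authenticated client, expecting a 200 with the literal body "ok" (both visible in the log). A minimal sketch of that probe:

    package example

    import (
    	"context"

    	"k8s.io/client-go/kubernetes"
    )

    // healthzOK mirrors the check logged above: GET /healthz, expect "ok".
    func healthzOK(ctx context.Context, cs *kubernetes.Clientset) bool {
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    	return err == nil && string(body) == "ok"
    }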
	I0605 17:44:37.540048  435505 system_pods.go:43] waiting for kube-system pods to appear ...
	I0605 17:44:37.712477  435505 request.go:628] Waited for 172.343228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0605 17:44:37.718763  435505 system_pods.go:59] 8 kube-system pods found
	I0605 17:44:37.718804  435505 system_pods.go:61] "coredns-66bff467f8-9n2md" [c5936e00-239b-40be-8d80-7c802b2872bc] Running
	I0605 17:44:37.718811  435505 system_pods.go:61] "etcd-ingress-addon-legacy-980425" [70a76805-155d-454a-846c-a7324b661e04] Running
	I0605 17:44:37.718817  435505 system_pods.go:61] "kindnet-gdmmw" [73ad60d3-0a26-4caa-ac91-8aaeaf8b8acd] Running
	I0605 17:44:37.718859  435505 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-980425" [1a51bfc4-9a8d-4f0f-b165-9119850f10b5] Running
	I0605 17:44:37.718880  435505 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-980425" [1afdfd4f-a7be-434a-b961-afc2ca93766d] Running
	I0605 17:44:37.718886  435505 system_pods.go:61] "kube-proxy-6vtpr" [50d405f1-aaa6-431b-a83e-2e5c40214a69] Running
	I0605 17:44:37.718891  435505 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-980425" [f487b089-cafb-4702-87f6-48bb17538189] Running
	I0605 17:44:37.718896  435505 system_pods.go:61] "storage-provisioner" [b2076fe5-561c-4a85-9c47-6e00c38c8629] Running
	I0605 17:44:37.718901  435505 system_pods.go:74] duration metric: took 178.848382ms to wait for pod list to return data ...
	I0605 17:44:37.718925  435505 default_sa.go:34] waiting for default service account to be created ...
	I0605 17:44:37.912235  435505 request.go:628] Waited for 193.224835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0605 17:44:37.914665  435505 default_sa.go:45] found service account: "default"
	I0605 17:44:37.914689  435505 default_sa.go:55] duration metric: took 195.753867ms for default service account to be created ...
	I0605 17:44:37.914699  435505 system_pods.go:116] waiting for k8s-apps to be running ...
	I0605 17:44:38.113170  435505 request.go:628] Waited for 198.368741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0605 17:44:38.119655  435505 system_pods.go:86] 8 kube-system pods found
	I0605 17:44:38.119688  435505 system_pods.go:89] "coredns-66bff467f8-9n2md" [c5936e00-239b-40be-8d80-7c802b2872bc] Running
	I0605 17:44:38.119696  435505 system_pods.go:89] "etcd-ingress-addon-legacy-980425" [70a76805-155d-454a-846c-a7324b661e04] Running
	I0605 17:44:38.119702  435505 system_pods.go:89] "kindnet-gdmmw" [73ad60d3-0a26-4caa-ac91-8aaeaf8b8acd] Running
	I0605 17:44:38.119707  435505 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-980425" [1a51bfc4-9a8d-4f0f-b165-9119850f10b5] Running
	I0605 17:44:38.119713  435505 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-980425" [1afdfd4f-a7be-434a-b961-afc2ca93766d] Running
	I0605 17:44:38.119722  435505 system_pods.go:89] "kube-proxy-6vtpr" [50d405f1-aaa6-431b-a83e-2e5c40214a69] Running
	I0605 17:44:38.119738  435505 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-980425" [f487b089-cafb-4702-87f6-48bb17538189] Running
	I0605 17:44:38.119764  435505 system_pods.go:89] "storage-provisioner" [b2076fe5-561c-4a85-9c47-6e00c38c8629] Running
	I0605 17:44:38.119774  435505 system_pods.go:126] duration metric: took 205.067945ms to wait for k8s-apps to be running ...
	I0605 17:44:38.119786  435505 system_svc.go:44] waiting for kubelet service to be running ....
	I0605 17:44:38.119875  435505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:44:38.134189  435505 system_svc.go:56] duration metric: took 14.391656ms WaitForService to wait for kubelet.
	I0605 17:44:38.134217  435505 kubeadm.go:581] duration metric: took 25.190978842s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0605 17:44:38.134237  435505 node_conditions.go:102] verifying NodePressure condition ...
	I0605 17:44:38.312634  435505 request.go:628] Waited for 178.325904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0605 17:44:38.315648  435505 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0605 17:44:38.315680  435505 node_conditions.go:123] node cpu capacity is 2
	I0605 17:44:38.315693  435505 node_conditions.go:105] duration metric: took 181.450284ms to run NodePressure ...
	I0605 17:44:38.315704  435505 start.go:228] waiting for startup goroutines ...
	I0605 17:44:38.315730  435505 start.go:233] waiting for cluster config update ...
	I0605 17:44:38.315745  435505 start.go:242] writing updated cluster config ...
	I0605 17:44:38.316081  435505 ssh_runner.go:195] Run: rm -f paused
	I0605 17:44:38.380938  435505 start.go:573] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0605 17:44:38.389142  435505 out.go:177] 
	W0605 17:44:38.396537  435505 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0605 17:44:38.401768  435505 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0605 17:44:38.407765  435505 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-980425" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Jun 05 17:47:41 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:41.292267538Z" level=info msg="Stopping pod sandbox: 05f3ea7a1087259be19d7cd16802ea0f101dbb850a4482fb3d0e28b87837856c" id=88eef4b1-e46e-4219-bf38-af9477ba7627 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 05 17:47:41 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:41.294547536Z" level=info msg="Stopped pod sandbox: 05f3ea7a1087259be19d7cd16802ea0f101dbb850a4482fb3d0e28b87837856c" id=88eef4b1-e46e-4219-bf38-af9477ba7627 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 05 17:47:41 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:41.315817071Z" level=info msg="Stopping pod sandbox: 05f3ea7a1087259be19d7cd16802ea0f101dbb850a4482fb3d0e28b87837856c" id=097c8440-81e8-4ae3-9bd6-19efb81e79f9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 05 17:47:41 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:41.315863881Z" level=info msg="Stopped pod sandbox (already stopped): 05f3ea7a1087259be19d7cd16802ea0f101dbb850a4482fb3d0e28b87837856c" id=097c8440-81e8-4ae3-9bd6-19efb81e79f9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 05 17:47:41 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:41.990547822Z" level=info msg="Stopping container: 39269b50244ff01466a45b6181efaf26dce4c4025ea89d471fcd2c615b38d100 (timeout: 2s)" id=eb63f5e4-4e13-4962-b2cb-f82ab2fc5533 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 05 17:47:42 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:42.023391647Z" level=info msg="Stopping container: 39269b50244ff01466a45b6181efaf26dce4c4025ea89d471fcd2c615b38d100 (timeout: 2s)" id=bccbbf74-bc11-4e01-b756-c0a2f3f779dc name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.022483233Z" level=warning msg="Stopping container 39269b50244ff01466a45b6181efaf26dce4c4025ea89d471fcd2c615b38d100 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=eb63f5e4-4e13-4962-b2cb-f82ab2fc5533 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 05 17:47:44 ingress-addon-legacy-980425 conmon[2668]: conmon 39269b50244ff01466a4 <ninfo>: container 2680 exited with status 137
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.206037192Z" level=info msg="Stopped container 39269b50244ff01466a45b6181efaf26dce4c4025ea89d471fcd2c615b38d100: ingress-nginx/ingress-nginx-controller-7fcf777cb7-s4qj8/controller" id=bccbbf74-bc11-4e01-b756-c0a2f3f779dc name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.207028895Z" level=info msg="Stopping pod sandbox: 507e04e92f4caaccfe4269a2460148e3573ab897b4bc4f3d74cf7841d48d8e00" id=ae47d074-21da-400e-bcfa-2844af206e99 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.208712635Z" level=info msg="Stopped container 39269b50244ff01466a45b6181efaf26dce4c4025ea89d471fcd2c615b38d100: ingress-nginx/ingress-nginx-controller-7fcf777cb7-s4qj8/controller" id=eb63f5e4-4e13-4962-b2cb-f82ab2fc5533 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.209151215Z" level=info msg="Stopping pod sandbox: 507e04e92f4caaccfe4269a2460148e3573ab897b4bc4f3d74cf7841d48d8e00" id=129c08a0-3e3c-4d73-8e10-5608e0379206 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.210633093Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-2GS6IBT3MEH6OZQD - [0:0]\n:KUBE-HP-SGRET6DA7LUZ2A5B - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-2GS6IBT3MEH6OZQD\n-X KUBE-HP-SGRET6DA7LUZ2A5B\nCOMMIT\n"
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.212214129Z" level=info msg="Closing host port tcp:80"
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.212257419Z" level=info msg="Closing host port tcp:443"
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.213440655Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.213460536Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.213600950Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-s4qj8 Namespace:ingress-nginx ID:507e04e92f4caaccfe4269a2460148e3573ab897b4bc4f3d74cf7841d48d8e00 UID:872b7b9f-0f33-4a28-ad98-cb5f3554429c NetNS:/var/run/netns/3f1dea52-5bbb-4387-a70c-8a2bbaa506ee Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.213753114Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-s4qj8 from CNI network \"kindnet\" (type=ptp)"
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.231475707Z" level=info msg="Stopped pod sandbox: 507e04e92f4caaccfe4269a2460148e3573ab897b4bc4f3d74cf7841d48d8e00" id=ae47d074-21da-400e-bcfa-2844af206e99 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 05 17:47:44 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:44.231590078Z" level=info msg="Stopped pod sandbox (already stopped): 507e04e92f4caaccfe4269a2460148e3573ab897b4bc4f3d74cf7841d48d8e00" id=129c08a0-3e3c-4d73-8e10-5608e0379206 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 05 17:47:45 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:45.291737779Z" level=info msg="Stopping container: 39269b50244ff01466a45b6181efaf26dce4c4025ea89d471fcd2c615b38d100 (timeout: 2s)" id=d406b2ae-bd72-4ab7-9402-022e7845cbda name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 05 17:47:45 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:45.296123621Z" level=info msg="Stopped container 39269b50244ff01466a45b6181efaf26dce4c4025ea89d471fcd2c615b38d100: ingress-nginx/ingress-nginx-controller-7fcf777cb7-s4qj8/controller" id=d406b2ae-bd72-4ab7-9402-022e7845cbda name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 05 17:47:45 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:45.297236892Z" level=info msg="Stopping pod sandbox: 507e04e92f4caaccfe4269a2460148e3573ab897b4bc4f3d74cf7841d48d8e00" id=7d33ccfb-37a2-41f3-b6d7-a1b6cec86953 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 05 17:47:45 ingress-addon-legacy-980425 crio[900]: time="2023-06-05 17:47:45.297379964Z" level=info msg="Stopped pod sandbox (already stopped): 507e04e92f4caaccfe4269a2460148e3573ab897b4bc4f3d74cf7841d48d8e00" id=7d33ccfb-37a2-41f3-b6d7-a1b6cec86953 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a161a3d68efcd       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                   9 seconds ago       Exited              hello-world-app           2                   26eac9ca53552       hello-world-app-5f5d8b66bb-6pzjs
	376360facbf56       docker.io/library/nginx@sha256:203cba3f56d7dba1d66b95c091db65a4f0778eb5d16e76151e73e0413e317328                    2 minutes ago       Running             nginx                     0                   6c31370ddec4e       nginx
	39269b50244ff       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   507e04e92f4ca       ingress-nginx-controller-7fcf777cb7-s4qj8
	f97719b7c5a18       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   a335ca15bb1d6       ingress-nginx-admission-patch-r885t
	b141799639e0d       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   33425ee942382       ingress-nginx-admission-create-f7kcs
	04cb74735d7d3       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   210d152c97dd7       storage-provisioner
	7d141f7daebdc       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   823c1d64591aa       coredns-66bff467f8-9n2md
	b69472d5f7bfb       docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f                 3 minutes ago       Running             kindnet-cni               0                   23fe40cd55cc5       kindnet-gdmmw
	b146157548ab0       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   58e69f8871f88       kube-proxy-6vtpr
	7e40c6771f173       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   ce1dff7355f4d       etcd-ingress-addon-legacy-980425
	19f632c24c184       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   5c1960623dc75       kube-scheduler-ingress-addon-legacy-980425
	3852d1ae233e0       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   199f9d3c7bada       kube-controller-manager-ingress-addon-legacy-980425
	89db8c7434734       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   0e8abc403567f       kube-apiserver-ingress-addon-legacy-980425
	
	* 
	* ==> coredns [7d141f7daebdcb5588fc67aa508b640430050706c4a023c090de021c19b7836e] <==
	* [INFO] 10.244.0.5:56944 - 1688 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053908s
	[INFO] 10.244.0.5:35631 - 39912 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002026517s
	[INFO] 10.244.0.5:56944 - 12030 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057706s
	[INFO] 10.244.0.5:35631 - 10816 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000166498s
	[INFO] 10.244.0.5:56944 - 36690 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001065976s
	[INFO] 10.244.0.5:56944 - 39532 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001029087s
	[INFO] 10.244.0.5:56944 - 59341 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047483s
	[INFO] 10.244.0.5:47830 - 12001 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000094236s
	[INFO] 10.244.0.5:46212 - 52418 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070761s
	[INFO] 10.244.0.5:46212 - 58564 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043816s
	[INFO] 10.244.0.5:46212 - 40524 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038301s
	[INFO] 10.244.0.5:46212 - 37044 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003511s
	[INFO] 10.244.0.5:46212 - 35057 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003465s
	[INFO] 10.244.0.5:46212 - 61093 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034724s
	[INFO] 10.244.0.5:47830 - 18754 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000041829s
	[INFO] 10.244.0.5:47830 - 20146 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044537s
	[INFO] 10.244.0.5:47830 - 39500 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034593s
	[INFO] 10.244.0.5:46212 - 50547 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001310833s
	[INFO] 10.244.0.5:47830 - 27305 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071745s
	[INFO] 10.244.0.5:47830 - 25645 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060947s
	[INFO] 10.244.0.5:46212 - 61679 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002624482s
	[INFO] 10.244.0.5:47830 - 51902 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002128228s
	[INFO] 10.244.0.5:46212 - 36286 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000115749s
	[INFO] 10.244.0.5:47830 - 59758 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000938658s
	[INFO] 10.244.0.5:47830 - 6521 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000042601s
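The NXDOMAIN/NOERROR pattern above is the pod's DNS search-path expansion at work: each lookup of hello-world-app.default.svc.cluster.local is retried with every suffix in the querying pod's search list before the absolute name finally answers NOERROR. From the suffixes visible in the log, that pod's /etc/resolv.conf would look roughly like the following (the search domains and ordering are taken from the log; the nameserver value and ndots:5 are the usual Kubernetes defaults, assumed here, not shown in the log):

    nameserver 10.96.0.10
    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    options ndots:5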
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-980425
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-980425
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b059332e570e1d712234ec4f823aa77854e7956d
	                    minikube.k8s.io/name=ingress-addon-legacy-980425
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_05T17_43_58_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Jun 2023 17:43:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-980425
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Jun 2023 17:47:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Jun 2023 17:47:31 +0000   Mon, 05 Jun 2023 17:43:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Jun 2023 17:47:31 +0000   Mon, 05 Jun 2023 17:43:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Jun 2023 17:47:31 +0000   Mon, 05 Jun 2023 17:43:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Jun 2023 17:47:31 +0000   Mon, 05 Jun 2023 17:44:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-980425
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 561308c984f744428be8a1280b1b969a
	  System UUID:                af8ec7e3-d922-43e0-aefe-57f1b2746850
	  Boot ID:                    da2c815d-c926-431d-a79c-25e8afa61b1d
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-6pzjs                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-9n2md                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m38s
	  kube-system                 etcd-ingress-addon-legacy-980425                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kindnet-gdmmw                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m38s
	  kube-system                 kube-apiserver-ingress-addon-legacy-980425             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-980425    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-6vtpr                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-scheduler-ingress-addon-legacy-980425             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m3s (x5 over 4m3s)  kubelet     Node ingress-addon-legacy-980425 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x5 over 4m3s)  kubelet     Node ingress-addon-legacy-980425 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x5 over 4m3s)  kubelet     Node ingress-addon-legacy-980425 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m49s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s                kubelet     Node ingress-addon-legacy-980425 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s                kubelet     Node ingress-addon-legacy-980425 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s                kubelet     Node ingress-addon-legacy-980425 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m37s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m29s                kubelet     Node ingress-addon-legacy-980425 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001064] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000961] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000e03af87c
	[  +0.001075] FS-Cache: N-key=[8] 'd1d1c90000000000'
	[  +0.004539] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000982] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=00000000e785b4d1
	[  +0.001078] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000723] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000005f019a4a
	[  +0.001044] FS-Cache: N-key=[8] 'd1d1c90000000000'
	[  +3.062644] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=0000000061ba42e8
	[  +0.001124] FS-Cache: O-key=[8] 'd0d1c90000000000'
	[  +0.000715] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000e03af87c
	[  +0.001040] FS-Cache: N-key=[8] 'd0d1c90000000000'
	[  +0.324591] FS-Cache: Duplicate cookie detected
	[  +0.000707] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=000000004b485b91
	[  +0.001042] FS-Cache: O-key=[8] 'd6d1c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000995] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000003ad92423
	[  +0.001057] FS-Cache: N-key=[8] 'd6d1c90000000000'
	
	* 
	* ==> etcd [7e40c6771f173e3733d93d72ecde09381adc2a77cbfd6713d3f02751e2d5e16f] <==
	* raft2023/06/05 17:43:49 INFO: aec36adc501070cc became follower at term 1
	raft2023/06/05 17:43:49 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-06-05 17:43:49.657687 W | auth: simple token is not cryptographically signed
	2023-06-05 17:43:49.809580 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-06-05 17:43:49.832110 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/06/05 17:43:49 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-06-05 17:43:49.859453 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-06-05 17:43:49.907060 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-06-05 17:43:49.907490 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-06-05 17:43:49.907556 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/06/05 17:43:50 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/06/05 17:43:50 INFO: aec36adc501070cc became candidate at term 2
	raft2023/06/05 17:43:50 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/06/05 17:43:50 INFO: aec36adc501070cc became leader at term 2
	raft2023/06/05 17:43:50 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-06-05 17:43:50.226132 I | etcdserver: published {Name:ingress-addon-legacy-980425 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-06-05 17:43:50.226309 I | etcdserver: setting up the initial cluster version to 3.4
	2023-06-05 17:43:50.226372 I | embed: ready to serve client requests
	2023-06-05 17:43:50.227729 I | embed: serving client requests on 127.0.0.1:2379
	2023-06-05 17:43:50.227791 I | embed: ready to serve client requests
	2023-06-05 17:43:50.320207 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-06-05 17:43:50.320345 I | etcdserver/api: enabled capabilities for version 3.4
	2023-06-05 17:43:50.324623 I | embed: serving client requests on 192.168.49.2:2379
	2023-06-05 17:44:12.869730 W | etcdserver: request "header:<ID:8128021565199727197 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:327 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3791 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>" with result "size:3568" took too long (119.13625ms) to execute
	2023-06-05 17:44:12.870951 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:577" took too long (125.674979ms) to execute
	
	* 
	* ==> kernel <==
	*  17:47:50 up  2:30,  0 users,  load average: 0.27, 0.94, 1.90
	Linux ingress-addon-legacy-980425 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [b69472d5f7bfba54166180862ad2722b6a4bf33251007de4bbddac32ff23e0e1] <==
	* I0605 17:45:46.105076       1 main.go:227] handling current node
	I0605 17:45:56.114297       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:45:56.114327       1 main.go:227] handling current node
	I0605 17:46:06.118344       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:46:06.118377       1 main.go:227] handling current node
	I0605 17:46:16.122391       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:46:16.122423       1 main.go:227] handling current node
	I0605 17:46:26.133089       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:46:26.133116       1 main.go:227] handling current node
	I0605 17:46:36.145392       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:46:36.145434       1 main.go:227] handling current node
	I0605 17:46:46.150131       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:46:46.150162       1 main.go:227] handling current node
	I0605 17:46:56.156512       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:46:56.156544       1 main.go:227] handling current node
	I0605 17:47:06.160462       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:47:06.160491       1 main.go:227] handling current node
	I0605 17:47:16.164776       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:47:16.164807       1 main.go:227] handling current node
	I0605 17:47:26.177189       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:47:26.177380       1 main.go:227] handling current node
	I0605 17:47:36.184808       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:47:36.184839       1 main.go:227] handling current node
	I0605 17:47:46.193878       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0605 17:47:46.193904       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [89db8c74347345b476c3cdec11b1a7b4f48501a9e669d1bba913ebb0d4935ba7] <==
	* I0605 17:43:54.376717       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0605 17:43:54.470578       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0605 17:43:54.470626       1 cache.go:39] Caches are synced for autoregister controller
	I0605 17:43:54.481360       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0605 17:43:54.481458       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0605 17:43:54.482570       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0605 17:43:54.493689       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0605 17:43:55.276055       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0605 17:43:55.276084       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0605 17:43:55.282583       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0605 17:43:55.287436       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0605 17:43:55.287525       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0605 17:43:55.696610       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0605 17:43:55.735055       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0605 17:43:55.808532       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0605 17:43:55.809685       1 controller.go:609] quota admission added evaluator for: endpoints
	I0605 17:43:55.813261       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0605 17:43:56.701438       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0605 17:43:57.655462       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0605 17:43:57.764722       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0605 17:44:01.109709       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0605 17:44:12.514978       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0605 17:44:12.527243       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0605 17:44:39.077618       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0605 17:45:02.530653       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [3852d1ae233e0d78d878d935db4bf0eb421c68d067ef6fe8c817aed77dcba1cd] <==
	* I0605 17:44:12.550726       1 shared_informer.go:230] Caches are synced for GC 
	I0605 17:44:12.554427       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I0605 17:44:12.554576       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0605 17:44:12.560097       1 shared_informer.go:230] Caches are synced for resource quota 
	I0605 17:44:12.591868       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"b33a190a-2ca9-425e-b5ed-848add2ec09e", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0605 17:44:12.604357       1 shared_informer.go:230] Caches are synced for stateful set 
	I0605 17:44:12.654492       1 shared_informer.go:230] Caches are synced for resource quota 
	I0605 17:44:12.655027       1 shared_informer.go:230] Caches are synced for attach detach 
	I0605 17:44:12.673990       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0605 17:44:12.674081       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0605 17:44:12.689413       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"905b8bcc-ce5d-426a-9e01-74f7121d1599", APIVersion:"apps/v1", ResourceVersion:"234", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-gdmmw
	I0605 17:44:12.692021       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0605 17:44:12.731883       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d064a817-79ff-4659-8715-beebd0b8dc59", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-9n2md
	I0605 17:44:12.756857       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0605 17:44:12.863797       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"4f58bec2-2c75-425e-a79b-a728fc823ccf", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-6vtpr
	I0605 17:44:22.503363       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0605 17:44:39.052956       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"cf4630f5-1064-4c3c-a367-7ecea8ba19b4", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0605 17:44:39.070294       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"b05773c7-a78e-49a3-b793-b0b59d7fe382", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-s4qj8
	I0605 17:44:39.129437       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"4246a4ed-3e78-4c7e-af54-47c834c22d50", APIVersion:"batch/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-f7kcs
	I0605 17:44:39.152424       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"30f51d38-1ff8-423c-b69a-87b8140f91e4", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-r885t
	I0605 17:44:41.650655       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"4246a4ed-3e78-4c7e-af54-47c834c22d50", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0605 17:44:42.645780       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"30f51d38-1ff8-423c-b69a-87b8140f91e4", APIVersion:"batch/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0605 17:47:23.661723       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"a2e941dc-882c-43dc-a255-f7b70235738f", APIVersion:"apps/v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0605 17:47:23.678226       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"d5aab86a-c7e7-40bd-b601-f708f64f0667", APIVersion:"apps/v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-6pzjs
	E0605 17:47:46.656879       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-khckm" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [b146157548ab09f4fc30aadd82cb464cda2e3987f5f4592de1c2d4b4837a1dd2] <==
	* W0605 17:44:13.581671       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0605 17:44:13.593224       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0605 17:44:13.593276       1 server_others.go:186] Using iptables Proxier.
	I0605 17:44:13.593688       1 server.go:583] Version: v1.18.20
	I0605 17:44:13.594828       1 config.go:315] Starting service config controller
	I0605 17:44:13.594899       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0605 17:44:13.594981       1 config.go:133] Starting endpoints config controller
	I0605 17:44:13.595028       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0605 17:44:13.702869       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0605 17:44:13.702885       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [19f632c24c1842ce6cca926245f495d9bc74f91b56fef6cdefa0a6c66a272910] <==
	* W0605 17:43:54.417151       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0605 17:43:54.417197       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0605 17:43:54.466254       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0605 17:43:54.466280       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0605 17:43:54.468198       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0605 17:43:54.468379       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0605 17:43:54.468392       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0605 17:43:54.468414       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0605 17:43:54.477698       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0605 17:43:54.477820       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0605 17:43:54.477902       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0605 17:43:54.477980       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0605 17:43:54.478052       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0605 17:43:54.478117       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0605 17:43:54.478184       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0605 17:43:54.478248       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0605 17:43:54.478311       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0605 17:43:54.479572       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0605 17:43:54.479665       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0605 17:43:54.479822       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0605 17:43:55.336088       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0605 17:43:55.449951       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0605 17:43:55.529972       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0605 17:43:55.868619       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0605 17:44:12.961590       1 factory.go:503] pod: kube-system/coredns-66bff467f8-9n2md is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Jun 05 17:47:29 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:29.009384    1622 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9a0ae5de0b0fbd788a4a6f70e45fcbed377ba81d1fc7f3d46a542b020518d884
	Jun 05 17:47:29 ingress-addon-legacy-980425 kubelet[1622]: E0605 17:47:29.009628    1622 pod_workers.go:191] Error syncing pod eadbbda8-d147-4555-bb38-eb41953bab77 ("hello-world-app-5f5d8b66bb-6pzjs_default(eadbbda8-d147-4555-bb38-eb41953bab77)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-6pzjs_default(eadbbda8-d147-4555-bb38-eb41953bab77)"
	Jun 05 17:47:39 ingress-addon-legacy-980425 kubelet[1622]: E0605 17:47:39.291908    1622 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jun 05 17:47:39 ingress-addon-legacy-980425 kubelet[1622]: E0605 17:47:39.291963    1622 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jun 05 17:47:39 ingress-addon-legacy-980425 kubelet[1622]: E0605 17:47:39.292006    1622 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jun 05 17:47:39 ingress-addon-legacy-980425 kubelet[1622]: E0605 17:47:39.292039    1622 pod_workers.go:191] Error syncing pod 47ae0fe7-c463-44d2-8142-2268761259d9 ("kube-ingress-dns-minikube_kube-system(47ae0fe7-c463-44d2-8142-2268761259d9)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jun 05 17:47:39 ingress-addon-legacy-980425 kubelet[1622]: E0605 17:47:39.449352    1622 secret.go:195] Couldn't get secret kube-system/minikube-ingress-dns-token-w9krl: secret "minikube-ingress-dns-token-w9krl" not found
	Jun 05 17:47:39 ingress-addon-legacy-980425 kubelet[1622]: E0605 17:47:39.449476    1622 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/47ae0fe7-c463-44d2-8142-2268761259d9-minikube-ingress-dns-token-w9krl podName:47ae0fe7-c463-44d2-8142-2268761259d9 nodeName:}" failed. No retries permitted until 2023-06-05 17:47:39.949443873 +0000 UTC m=+222.359740561 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"minikube-ingress-dns-token-w9krl\" (UniqueName: \"kubernetes.io/secret/47ae0fe7-c463-44d2-8142-2268761259d9-minikube-ingress-dns-token-w9krl\") pod \"kube-ingress-dns-minikube\" (UID: \"47ae0fe7-c463-44d2-8142-2268761259d9\") : secret \"minikube-ingress-dns-token-w9krl\" not found"
	Jun 05 17:47:39 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:39.549483    1622 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-w9krl" (UniqueName: "kubernetes.io/secret/47ae0fe7-c463-44d2-8142-2268761259d9-minikube-ingress-dns-token-w9krl") pod "47ae0fe7-c463-44d2-8142-2268761259d9" (UID: "47ae0fe7-c463-44d2-8142-2268761259d9")
	Jun 05 17:47:39 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:39.556245    1622 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47ae0fe7-c463-44d2-8142-2268761259d9-minikube-ingress-dns-token-w9krl" (OuterVolumeSpecName: "minikube-ingress-dns-token-w9krl") pod "47ae0fe7-c463-44d2-8142-2268761259d9" (UID: "47ae0fe7-c463-44d2-8142-2268761259d9"). InnerVolumeSpecName "minikube-ingress-dns-token-w9krl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 05 17:47:39 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:39.649899    1622 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-w9krl" (UniqueName: "kubernetes.io/secret/47ae0fe7-c463-44d2-8142-2268761259d9-minikube-ingress-dns-token-w9krl") on node "ingress-addon-legacy-980425" DevicePath ""
	Jun 05 17:47:40 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:40.290926    1622 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9a0ae5de0b0fbd788a4a6f70e45fcbed377ba81d1fc7f3d46a542b020518d884
	Jun 05 17:47:41 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:41.035305    1622 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9a0ae5de0b0fbd788a4a6f70e45fcbed377ba81d1fc7f3d46a542b020518d884
	Jun 05 17:47:41 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:41.040768    1622 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a161a3d68efcd0d89177fc60cf0d855cff458479e1791d71bba81f17e202c7e4
	Jun 05 17:47:41 ingress-addon-legacy-980425 kubelet[1622]: E0605 17:47:41.041251    1622 pod_workers.go:191] Error syncing pod eadbbda8-d147-4555-bb38-eb41953bab77 ("hello-world-app-5f5d8b66bb-6pzjs_default(eadbbda8-d147-4555-bb38-eb41953bab77)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-6pzjs_default(eadbbda8-d147-4555-bb38-eb41953bab77)"
	Jun 05 17:47:41 ingress-addon-legacy-980425 kubelet[1622]: E0605 17:47:41.992610    1622 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-s4qj8.1765d47b66bdb099", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-s4qj8", UID:"872b7b9f-0f33-4a28-ad98-cb5f3554429c", APIVersion:"v1", ResourceVersion:"474", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-980425"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc117a5cf7b008e99, ext:224400188938, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc117a5cf7b008e99, ext:224400188938, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-s4qj8.1765d47b66bdb099" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 05 17:47:42 ingress-addon-legacy-980425 kubelet[1622]: E0605 17:47:42.029907    1622 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-s4qj8.1765d47b66bdb099", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-s4qj8", UID:"872b7b9f-0f33-4a28-ad98-cb5f3554429c", APIVersion:"v1", ResourceVersion:"474", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-980425"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc117a5cf7b008e99, ext:224400188938, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc117a5cf815ab796, ext:224433019142, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-s4qj8.1765d47b66bdb099" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 05 17:47:42 ingress-addon-legacy-980425 kubelet[1622]: W0605 17:47:42.039252    1622 pod_container_deletor.go:77] Container "05f3ea7a1087259be19d7cd16802ea0f101dbb850a4482fb3d0e28b87837856c" not found in pod's containers
	Jun 05 17:47:45 ingress-addon-legacy-980425 kubelet[1622]: W0605 17:47:45.047037    1622 pod_container_deletor.go:77] Container "507e04e92f4caaccfe4269a2460148e3573ab897b4bc4f3d74cf7841d48d8e00" not found in pod's containers
	Jun 05 17:47:46 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:46.165719    1622 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-7zhn9" (UniqueName: "kubernetes.io/secret/872b7b9f-0f33-4a28-ad98-cb5f3554429c-ingress-nginx-token-7zhn9") pod "872b7b9f-0f33-4a28-ad98-cb5f3554429c" (UID: "872b7b9f-0f33-4a28-ad98-cb5f3554429c")
	Jun 05 17:47:46 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:46.165781    1622 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/872b7b9f-0f33-4a28-ad98-cb5f3554429c-webhook-cert") pod "872b7b9f-0f33-4a28-ad98-cb5f3554429c" (UID: "872b7b9f-0f33-4a28-ad98-cb5f3554429c")
	Jun 05 17:47:46 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:46.171550    1622 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872b7b9f-0f33-4a28-ad98-cb5f3554429c-ingress-nginx-token-7zhn9" (OuterVolumeSpecName: "ingress-nginx-token-7zhn9") pod "872b7b9f-0f33-4a28-ad98-cb5f3554429c" (UID: "872b7b9f-0f33-4a28-ad98-cb5f3554429c"). InnerVolumeSpecName "ingress-nginx-token-7zhn9". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 05 17:47:46 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:46.172276    1622 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872b7b9f-0f33-4a28-ad98-cb5f3554429c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "872b7b9f-0f33-4a28-ad98-cb5f3554429c" (UID: "872b7b9f-0f33-4a28-ad98-cb5f3554429c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 05 17:47:46 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:46.266084    1622 reconciler.go:319] Volume detached for volume "ingress-nginx-token-7zhn9" (UniqueName: "kubernetes.io/secret/872b7b9f-0f33-4a28-ad98-cb5f3554429c-ingress-nginx-token-7zhn9") on node "ingress-addon-legacy-980425" DevicePath ""
	Jun 05 17:47:46 ingress-addon-legacy-980425 kubelet[1622]: I0605 17:47:46.266134    1622 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/872b7b9f-0f33-4a28-ad98-cb5f3554429c-webhook-cert") on node "ingress-addon-legacy-980425" DevicePath ""
	
	* 
	* ==> storage-provisioner [04cb74735d7d366ac50fcdfc1594301fd6aa8039c3db872f2a4a8891335c956d] <==
	* I0605 17:44:29.087010       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0605 17:44:29.103554       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0605 17:44:29.103732       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0605 17:44:29.110631       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0605 17:44:29.111661       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-980425_5d73a1c1-4b1f-4a18-99d5-eb43af94d2c9!
	I0605 17:44:29.111062       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9357295b-0ce8-42f9-9a8b-48f3385a931a", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-980425_5d73a1c1-4b1f-4a18-99d5-eb43af94d2c9 became leader
	I0605 17:44:29.212859       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-980425_5d73a1c1-4b1f-4a18-99d5-eb43af94d2c9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-980425 -n ingress-addon-legacy-980425
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-980425 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (180.33s)
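
Note on the root cause: the kubelet log above shows CRI-O rejecting the image reference cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab as a "short-name" because no unqualified-search registries are defined in /etc/containers/registries.conf, so the kube-ingress-dns-minikube pod could never start. A minimal sketch of a workaround, assuming docker.io is the registry the addon intends and that the node's registries.conf uses v2 TOML syntax (neither verified against this job):

	# Hypothetical remediation on the test node; profile name taken from the logs above.
	minikube ssh -p ingress-addon-legacy-980425 -- sudo sh -c \
	  'printf "unqualified-search-registries = [\"docker.io\"]\n" >> /etc/containers/registries.conf && systemctl restart crio'
	# Alternatively, a fully qualified reference avoids the search list entirely:
	#   docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab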

TestMultiNode/serial/PingHostFrom2Pods (4.73s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-8g86r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-8g86r -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-8g86r -- sh -c "ping -c 1 192.168.58.1": exit status 1 (243.249554ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-8g86r): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-mtn99 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-mtn99 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-mtn99 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (249.532875ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-mtn99): exit status 1
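
The DNS step passes, but ping fails with "permission denied (are you root?)": busybox ping needs a raw ICMP socket, and under CRI-O the default container capability set typically omits NET_RAW, so the socket open is refused. Two hedged sketches of possible fixes, assuming deployment/busybox created from testdata/multinodes/multinode-pod-dns-test.yaml is the target; neither is what the test harness itself does:

	# Option 1: allow unprivileged ICMP echo for all groups via a pod-level safe sysctl
	# (helps only if this busybox build falls back to datagram ICMP sockets).
	kubectl --context multinode-292850 patch deployment busybox --type=json -p '[
	  {"op":"add","path":"/spec/template/spec/securityContext",
	   "value":{"sysctls":[{"name":"net.ipv4.ping_group_range","value":"0 2147483647"}]}}]'
	# Option 2: grant the container CAP_NET_RAW so raw ICMP sockets are permitted:
	#   spec.template.spec.containers[0].securityContext: {"capabilities":{"add":["NET_RAW"]}}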
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-292850
helpers_test.go:235: (dbg) docker inspect multinode-292850:

-- stdout --
	[
	    {
	        "Id": "ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47",
	        "Created": "2023-06-05T17:54:26.103512512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 472237,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-05T17:54:26.446153778Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:80ea0da8caa6eb7997e8d55fe8736424844c5160aabf0e85547dc140c538e81f",
	        "ResolvConfPath": "/var/lib/docker/containers/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47/hosts",
	        "LogPath": "/var/lib/docker/containers/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47-json.log",
	        "Name": "/multinode-292850",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-292850:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-292850",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/707e1d0dcbb1c9d8d9e16fd312940d8f2a0656c75c25676663095e51edf35294-init/diff:/var/lib/docker/overlay2/12deadd96699cc2736cf6d24a9900cb6d72f9bc5f3f15d793b28adb475def155/diff",
	                "MergedDir": "/var/lib/docker/overlay2/707e1d0dcbb1c9d8d9e16fd312940d8f2a0656c75c25676663095e51edf35294/merged",
	                "UpperDir": "/var/lib/docker/overlay2/707e1d0dcbb1c9d8d9e16fd312940d8f2a0656c75c25676663095e51edf35294/diff",
	                "WorkDir": "/var/lib/docker/overlay2/707e1d0dcbb1c9d8d9e16fd312940d8f2a0656c75c25676663095e51edf35294/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-292850",
	                "Source": "/var/lib/docker/volumes/multinode-292850/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-292850",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-292850",
	                "name.minikube.sigs.k8s.io": "multinode-292850",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b41a54904604cdba2060822c8f0639b0b65c6dac259c3cd9f57c7f409d030d0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6b41a5490460",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-292850": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ae5a1ee2c03e",
	                        "multinode-292850"
	                    ],
	                    "NetworkID": "bf137ee0d3bb47f031cbc38ef3eda83e489cf38f92bebfb7eea40737a0d88c3e",
	                    "EndpointID": "96d954f514ee2f3cf635ce9c75f17814b7d0b2902498c84ddf6ac19a8be35734",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
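
For orientation: 192.168.58.1, the address both pods tried to reach, is the host-side gateway of the multinode-292850 Docker network recorded under NetworkSettings above. It can be read back directly, e.g.:

	# Illustrative: print the gateway recorded in the network's IPAM config.
	docker network inspect multinode-292850 --format '{{(index .IPAM.Config 0).Gateway}}'
	# -> 192.168.58.1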
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-292850 -n multinode-292850
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-292850 logs -n 25: (1.847715263s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-138151                           | mount-start-2-138151 | jenkins | v1.30.1 | 05 Jun 23 17:53 UTC | 05 Jun 23 17:54 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-138151 ssh -- ls                    | mount-start-2-138151 | jenkins | v1.30.1 | 05 Jun 23 17:54 UTC | 05 Jun 23 17:54 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-136395                           | mount-start-1-136395 | jenkins | v1.30.1 | 05 Jun 23 17:54 UTC | 05 Jun 23 17:54 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-138151 ssh -- ls                    | mount-start-2-138151 | jenkins | v1.30.1 | 05 Jun 23 17:54 UTC | 05 Jun 23 17:54 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-138151                           | mount-start-2-138151 | jenkins | v1.30.1 | 05 Jun 23 17:54 UTC | 05 Jun 23 17:54 UTC |
	| start   | -p mount-start-2-138151                           | mount-start-2-138151 | jenkins | v1.30.1 | 05 Jun 23 17:54 UTC | 05 Jun 23 17:54 UTC |
	| ssh     | mount-start-2-138151 ssh -- ls                    | mount-start-2-138151 | jenkins | v1.30.1 | 05 Jun 23 17:54 UTC | 05 Jun 23 17:54 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-138151                           | mount-start-2-138151 | jenkins | v1.30.1 | 05 Jun 23 17:54 UTC | 05 Jun 23 17:54 UTC |
	| delete  | -p mount-start-1-136395                           | mount-start-1-136395 | jenkins | v1.30.1 | 05 Jun 23 17:54 UTC | 05 Jun 23 17:54 UTC |
	| start   | -p multinode-292850                               | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:54 UTC | 05 Jun 23 17:56 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- apply -f                   | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- rollout                    | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- get pods -o                | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- get pods -o                | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- exec                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | busybox-67b7f59bb-8g86r --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- exec                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | busybox-67b7f59bb-mtn99 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- exec                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | busybox-67b7f59bb-8g86r --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- exec                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | busybox-67b7f59bb-mtn99 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- exec                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | busybox-67b7f59bb-8g86r -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- exec                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | busybox-67b7f59bb-mtn99 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- get pods -o                | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- exec                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | busybox-67b7f59bb-8g86r                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- exec                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC |                     |
	|         | busybox-67b7f59bb-8g86r -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- exec                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC | 05 Jun 23 17:56 UTC |
	|         | busybox-67b7f59bb-mtn99                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-292850 -- exec                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:56 UTC |                     |
	|         | busybox-67b7f59bb-mtn99 -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/05 17:54:20
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0605 17:54:20.720222  471785 out.go:296] Setting OutFile to fd 1 ...
	I0605 17:54:20.720429  471785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:54:20.720440  471785 out.go:309] Setting ErrFile to fd 2...
	I0605 17:54:20.720447  471785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:54:20.720673  471785 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 17:54:20.721178  471785 out.go:303] Setting JSON to false
	I0605 17:54:20.722290  471785 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":9393,"bootTime":1685978268,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 17:54:20.722392  471785 start.go:137] virtualization:  
	I0605 17:54:20.725732  471785 out.go:177] * [multinode-292850] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 17:54:20.729126  471785 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 17:54:20.729343  471785 notify.go:220] Checking for updates...
	I0605 17:54:20.733423  471785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 17:54:20.735683  471785 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:54:20.737872  471785 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 17:54:20.740034  471785 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 17:54:20.742009  471785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 17:54:20.744354  471785 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 17:54:20.769402  471785 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 17:54:20.769502  471785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:54:20.842925  471785 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-06-05 17:54:20.832472829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:54:20.843034  471785 docker.go:294] overlay module found
	I0605 17:54:20.845507  471785 out.go:177] * Using the docker driver based on user configuration
	I0605 17:54:20.847551  471785 start.go:297] selected driver: docker
	I0605 17:54:20.847571  471785 start.go:875] validating driver "docker" against <nil>
	I0605 17:54:20.847586  471785 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 17:54:20.848484  471785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:54:20.908528  471785 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-06-05 17:54:20.898026406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:54:20.908706  471785 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0605 17:54:20.908976  471785 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0605 17:54:20.911229  471785 out.go:177] * Using Docker driver with root privileges
	I0605 17:54:20.913566  471785 cni.go:84] Creating CNI manager for ""
	I0605 17:54:20.913590  471785 cni.go:136] 0 nodes found, recommending kindnet
	I0605 17:54:20.913601  471785 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0605 17:54:20.913614  471785 start_flags.go:319] config:
	{Name:multinode-292850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-292850 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:54:20.916239  471785 out.go:177] * Starting control plane node multinode-292850 in cluster multinode-292850
	I0605 17:54:20.918313  471785 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 17:54:20.920154  471785 out.go:177] * Pulling base image ...
	I0605 17:54:20.921888  471785 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:54:20.921910  471785 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon
	I0605 17:54:20.921944  471785 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0605 17:54:20.921955  471785 cache.go:57] Caching tarball of preloaded images
	I0605 17:54:20.922023  471785 preload.go:174] Found /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0605 17:54:20.922033  471785 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0605 17:54:20.922396  471785 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/config.json ...
	I0605 17:54:20.922424  471785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/config.json: {Name:mk960f826efc82d84c1e387924a097a3e51e7bd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:54:20.941653  471785 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon, skipping pull
	I0605 17:54:20.941675  471785 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f exists in daemon, skipping load
	I0605 17:54:20.941690  471785 cache.go:195] Successfully downloaded all kic artifacts
	I0605 17:54:20.941728  471785 start.go:364] acquiring machines lock for multinode-292850: {Name:mk1b17f87b7dc66c88bb0aa0000ec923c29b04ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 17:54:20.942264  471785 start.go:368] acquired machines lock for "multinode-292850" in 509.094µs
	I0605 17:54:20.942311  471785 start.go:93] Provisioning new machine with config: &{Name:multinode-292850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-292850 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0605 17:54:20.942424  471785 start.go:125] createHost starting for "" (driver="docker")
	I0605 17:54:20.944793  471785 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0605 17:54:20.945089  471785 start.go:159] libmachine.API.Create for "multinode-292850" (driver="docker")
	I0605 17:54:20.945118  471785 client.go:168] LocalClient.Create starting
	I0605 17:54:20.945183  471785 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem
	I0605 17:54:20.945226  471785 main.go:141] libmachine: Decoding PEM data...
	I0605 17:54:20.945245  471785 main.go:141] libmachine: Parsing certificate...
	I0605 17:54:20.945324  471785 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem
	I0605 17:54:20.945344  471785 main.go:141] libmachine: Decoding PEM data...
	I0605 17:54:20.945356  471785 main.go:141] libmachine: Parsing certificate...
	I0605 17:54:20.945767  471785 cli_runner.go:164] Run: docker network inspect multinode-292850 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0605 17:54:20.964080  471785 cli_runner.go:211] docker network inspect multinode-292850 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0605 17:54:20.964168  471785 network_create.go:281] running [docker network inspect multinode-292850] to gather additional debugging logs...
	I0605 17:54:20.964188  471785 cli_runner.go:164] Run: docker network inspect multinode-292850
	W0605 17:54:20.982254  471785 cli_runner.go:211] docker network inspect multinode-292850 returned with exit code 1
	I0605 17:54:20.982291  471785 network_create.go:284] error running [docker network inspect multinode-292850]: docker network inspect multinode-292850: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-292850 not found
	I0605 17:54:20.982304  471785 network_create.go:286] output of [docker network inspect multinode-292850]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-292850 not found
	
	** /stderr **
	I0605 17:54:20.982372  471785 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 17:54:21.004595  471785 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-491b70b5fde1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:06:2d:63:c0} reservation:<nil>}
	I0605 17:54:21.005040  471785 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000c57160}
	I0605 17:54:21.005074  471785 network_create.go:123] attempt to create docker network multinode-292850 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0605 17:54:21.005146  471785 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-292850 multinode-292850
	I0605 17:54:21.090453  471785 network_create.go:107] docker network multinode-292850 192.168.58.0/24 created
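
For reference, the network step above reduces to plain Docker CLI. A minimal sketch, assuming GNU xargs is available; the subnet scan mirrors what network_create.go does in Go, and the name and subnet values come straight from this run:

    # list existing networks with their subnets to find a free /24
    docker network ls --format '{{.Name}}' \
      | xargs -r -n1 docker network inspect \
          --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
    # create the cluster network on the first free subnet (192.168.58.0/24 here)
    docker network create --driver=bridge --subnet=192.168.58.0/24 \
      --gateway=192.168.58.1 -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true multinode-292850
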
	I0605 17:54:21.090487  471785 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-292850" container
	I0605 17:54:21.090576  471785 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0605 17:54:21.108370  471785 cli_runner.go:164] Run: docker volume create multinode-292850 --label name.minikube.sigs.k8s.io=multinode-292850 --label created_by.minikube.sigs.k8s.io=true
	I0605 17:54:21.127405  471785 oci.go:103] Successfully created a docker volume multinode-292850
	I0605 17:54:21.127509  471785 cli_runner.go:164] Run: docker run --rm --name multinode-292850-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-292850 --entrypoint /usr/bin/test -v multinode-292850:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -d /var/lib
	I0605 17:54:21.721739  471785 oci.go:107] Successfully prepared a docker volume multinode-292850
	I0605 17:54:21.721776  471785 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:54:21.721805  471785 kic.go:190] Starting extracting preloaded images to volume ...
	I0605 17:54:21.721930  471785 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-292850:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -I lz4 -xf /preloaded.tar -C /extractDir
	I0605 17:54:26.016535  471785 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-292850:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -I lz4 -xf /preloaded.tar -C /extractDir: (4.294552203s)
	I0605 17:54:26.016569  471785 kic.go:199] duration metric: took 4.294768 seconds to extract preloaded images to volume
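
The preload step above mounts the lz4 tarball read-only into a throwaway container and untars it onto the named volume. The same pattern, with the long paths from the log abbreviated to placeholder variables:

    # PRELOAD_TARBALL and KICBASE_IMAGE stand in for the full path and digest above
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
      -v multinode-292850:/extractDir \
      "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir
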
	W0605 17:54:26.016741  471785 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0605 17:54:26.016867  471785 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0605 17:54:26.082238  471785 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-292850 --name multinode-292850 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-292850 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-292850 --network multinode-292850 --ip 192.168.58.2 --volume multinode-292850:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f
	I0605 17:54:26.455128  471785 cli_runner.go:164] Run: docker container inspect multinode-292850 --format={{.State.Running}}
	I0605 17:54:26.479862  471785 cli_runner.go:164] Run: docker container inspect multinode-292850 --format={{.State.Status}}
	I0605 17:54:26.511800  471785 cli_runner.go:164] Run: docker exec multinode-292850 stat /var/lib/dpkg/alternatives/iptables
	I0605 17:54:26.591627  471785 oci.go:144] the created container "multinode-292850" has a running status.
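
Stripped of labels, the long docker run above is a privileged, systemd-capable container pinned to a static IP on the cluster network, with the API server and SSH ports published on loopback. A trimmed sketch (KICBASE_IMAGE again a placeholder):

    docker run -d -t --privileged --security-opt seccomp=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --network multinode-292850 --ip 192.168.58.2 \
      --memory=2200mb --cpus=2 \
      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
      --name multinode-292850 "$KICBASE_IMAGE"
    # confirm it is running, as the next log lines do
    docker container inspect multinode-292850 --format '{{.State.Running}}'
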
	I0605 17:54:26.591662  471785 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa...
	I0605 17:54:27.521374  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0605 17:54:27.521572  471785 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0605 17:54:27.547991  471785 cli_runner.go:164] Run: docker container inspect multinode-292850 --format={{.State.Status}}
	I0605 17:54:27.578759  471785 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0605 17:54:27.578780  471785 kic_runner.go:114] Args: [docker exec --privileged multinode-292850 chown docker:docker /home/docker/.ssh/authorized_keys]
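
The SSH setup is ordinary OpenSSH material pushed into the container. A sketch of the moral equivalent, assuming /home/docker/.ssh already exists in the base image; minikube generates the keypair in Go rather than shelling out to ssh-keygen:

    ssh-keygen -t rsa -N '' -f id_rsa
    docker cp id_rsa.pub multinode-292850:/home/docker/.ssh/authorized_keys
    docker exec --privileged multinode-292850 \
      chown docker:docker /home/docker/.ssh/authorized_keys
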
	I0605 17:54:27.660732  471785 cli_runner.go:164] Run: docker container inspect multinode-292850 --format={{.State.Status}}
	I0605 17:54:27.688262  471785 machine.go:88] provisioning docker machine ...
	I0605 17:54:27.688298  471785 ubuntu.go:169] provisioning hostname "multinode-292850"
	I0605 17:54:27.688370  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:54:27.717101  471785 main.go:141] libmachine: Using SSH client type: native
	I0605 17:54:27.717579  471785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0605 17:54:27.717593  471785 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-292850 && echo "multinode-292850" | sudo tee /etc/hostname
	I0605 17:54:27.884216  471785 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-292850
	
	I0605 17:54:27.884301  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:54:27.906418  471785 main.go:141] libmachine: Using SSH client type: native
	I0605 17:54:27.906890  471785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0605 17:54:27.906915  471785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-292850' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-292850/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-292850' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0605 17:54:28.053838  471785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0605 17:54:28.053910  471785 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16634-402421/.minikube CaCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16634-402421/.minikube}
	I0605 17:54:28.053949  471785 ubuntu.go:177] setting up certificates
	I0605 17:54:28.053987  471785 provision.go:83] configureAuth start
	I0605 17:54:28.054095  471785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-292850
	I0605 17:54:28.074826  471785 provision.go:138] copyHostCerts
	I0605 17:54:28.074874  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem
	I0605 17:54:28.074910  471785 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem, removing ...
	I0605 17:54:28.074917  471785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem
	I0605 17:54:28.075001  471785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem (1123 bytes)
	I0605 17:54:28.075094  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem
	I0605 17:54:28.075111  471785 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem, removing ...
	I0605 17:54:28.075115  471785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem
	I0605 17:54:28.075148  471785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem (1675 bytes)
	I0605 17:54:28.075201  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem
	I0605 17:54:28.075217  471785 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem, removing ...
	I0605 17:54:28.075221  471785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem
	I0605 17:54:28.075252  471785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem (1082 bytes)
	I0605 17:54:28.075306  471785 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem org=jenkins.multinode-292850 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-292850]
	I0605 17:54:28.376992  471785 provision.go:172] copyRemoteCerts
	I0605 17:54:28.377065  471785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0605 17:54:28.377107  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:54:28.395737  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa Username:docker}
	I0605 17:54:28.499347  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0605 17:54:28.499413  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0605 17:54:28.529511  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0605 17:54:28.529593  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0605 17:54:28.559479  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0605 17:54:28.559542  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
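
The server certificate is minted from the local minikube CA with the SANs listed above. A rough openssl equivalent for illustration only; minikube actually uses Go's crypto/x509, so these flags are an assumption, not its code path:

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.multinode-292850" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem -extfile <(echo \
      "subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-292850")
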
	I0605 17:54:28.588612  471785 provision.go:86] duration metric: configureAuth took 534.569055ms
	I0605 17:54:28.588636  471785 ubuntu.go:193] setting minikube options for container-runtime
	I0605 17:54:28.588836  471785 config.go:182] Loaded profile config "multinode-292850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:54:28.588949  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:54:28.610592  471785 main.go:141] libmachine: Using SSH client type: native
	I0605 17:54:28.611048  471785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0605 17:54:28.611071  471785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0605 17:54:28.873247  471785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0605 17:54:28.873269  471785 machine.go:91] provisioned docker machine in 1.184983183s
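
That SSH command drops a one-line environment file read by the crio unit. It can be verified from the host with minikube's own ssh wrapper:

    minikube ssh -p multinode-292850 -- cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
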
	I0605 17:54:28.873279  471785 client.go:171] LocalClient.Create took 7.928155969s
	I0605 17:54:28.873291  471785 start.go:167] duration metric: libmachine.API.Create for "multinode-292850" took 7.928203057s
	I0605 17:54:28.873299  471785 start.go:300] post-start starting for "multinode-292850" (driver="docker")
	I0605 17:54:28.873305  471785 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0605 17:54:28.873380  471785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0605 17:54:28.873432  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:54:28.903465  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa Username:docker}
	I0605 17:54:29.005412  471785 ssh_runner.go:195] Run: cat /etc/os-release
	I0605 17:54:29.010307  471785 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0605 17:54:29.010326  471785 command_runner.go:130] > NAME="Ubuntu"
	I0605 17:54:29.010334  471785 command_runner.go:130] > VERSION_ID="22.04"
	I0605 17:54:29.010340  471785 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0605 17:54:29.010346  471785 command_runner.go:130] > VERSION_CODENAME=jammy
	I0605 17:54:29.010350  471785 command_runner.go:130] > ID=ubuntu
	I0605 17:54:29.010355  471785 command_runner.go:130] > ID_LIKE=debian
	I0605 17:54:29.010361  471785 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0605 17:54:29.010367  471785 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0605 17:54:29.010375  471785 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0605 17:54:29.010387  471785 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0605 17:54:29.010395  471785 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0605 17:54:29.010458  471785 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0605 17:54:29.010484  471785 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0605 17:54:29.010501  471785 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0605 17:54:29.010508  471785 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0605 17:54:29.010521  471785 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/addons for local assets ...
	I0605 17:54:29.010588  471785 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/files for local assets ...
	I0605 17:54:29.010676  471785 filesync.go:149] local asset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> 4078132.pem in /etc/ssl/certs
	I0605 17:54:29.010688  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> /etc/ssl/certs/4078132.pem
	I0605 17:54:29.010842  471785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0605 17:54:29.022369  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 17:54:29.052738  471785 start.go:303] post-start completed in 179.423004ms
	I0605 17:54:29.053179  471785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-292850
	I0605 17:54:29.071914  471785 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/config.json ...
	I0605 17:54:29.072282  471785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 17:54:29.072340  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:54:29.090870  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa Username:docker}
	I0605 17:54:29.186108  471785 command_runner.go:130] > 16%
	I0605 17:54:29.186184  471785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0605 17:54:29.191702  471785 command_runner.go:130] > 165G
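
The two df probes read percent-used and free gigabytes on /var; the same one-liners work on any host:

    df -h /var | awk 'NR==2{print $5}'    # percent used ("16%" above)
    df -BG /var | awk 'NR==2{print $4}'   # free space in GB ("165G" above)
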
	I0605 17:54:29.192100  471785 start.go:128] duration metric: createHost completed in 8.249663357s
	I0605 17:54:29.192117  471785 start.go:83] releasing machines lock for "multinode-292850", held for 8.249833178s
	I0605 17:54:29.192219  471785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-292850
	I0605 17:54:29.209942  471785 ssh_runner.go:195] Run: cat /version.json
	I0605 17:54:29.209995  471785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0605 17:54:29.210066  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:54:29.209997  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:54:29.230468  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa Username:docker}
	I0605 17:54:29.232305  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa Username:docker}
	I0605 17:54:29.463635  471785 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0605 17:54:29.463688  471785 command_runner.go:130] > {"iso_version": "v1.30.1-1685728855-16612", "kicbase_version": "v0.0.39-1685959646-16634", "minikube_version": "v1.30.1", "commit": "3e578f96d97d80d30b72fb3b092a960e38fcaaa2"}
	I0605 17:54:29.463817  471785 ssh_runner.go:195] Run: systemctl --version
	I0605 17:54:29.469674  471785 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0605 17:54:29.469708  471785 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0605 17:54:29.469987  471785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0605 17:54:29.619913  471785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0605 17:54:29.625722  471785 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0605 17:54:29.625747  471785 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0605 17:54:29.625755  471785 command_runner.go:130] > Device: 3ah/58d	Inode: 3638838     Links: 1
	I0605 17:54:29.625763  471785 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0605 17:54:29.625770  471785 command_runner.go:130] > Access: 2023-04-04 14:31:21.000000000 +0000
	I0605 17:54:29.625776  471785 command_runner.go:130] > Modify: 2023-04-04 14:31:21.000000000 +0000
	I0605 17:54:29.625782  471785 command_runner.go:130] > Change: 2023-06-05 17:31:00.544911935 +0000
	I0605 17:54:29.625789  471785 command_runner.go:130] >  Birth: 2023-06-05 17:31:00.544911935 +0000
	I0605 17:54:29.625861  471785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 17:54:29.651454  471785 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0605 17:54:29.651616  471785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 17:54:29.695248  471785 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0605 17:54:29.695284  471785 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
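
Disabling the stock CNI configs is just a rename so CRI-O ignores them in favor of kindnet. A readable form of the two find invocations above:

    # park the loopback config
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
      -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    # park bridge/podman configs the same way
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -o -name '*podman*' \) -not -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
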
	I0605 17:54:29.695292  471785 start.go:481] detecting cgroup driver to use...
	I0605 17:54:29.695323  471785 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0605 17:54:29.695381  471785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0605 17:54:29.715293  471785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0605 17:54:29.730312  471785 docker.go:193] disabling cri-docker service (if available) ...
	I0605 17:54:29.730424  471785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0605 17:54:29.747417  471785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0605 17:54:29.764582  471785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0605 17:54:29.876191  471785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0605 17:54:29.987425  471785 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0605 17:54:29.987454  471785 docker.go:209] disabling docker service ...
	I0605 17:54:29.987510  471785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0605 17:54:30.031902  471785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0605 17:54:30.065098  471785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0605 17:54:30.181342  471785 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0605 17:54:30.181450  471785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0605 17:54:30.293785  471785 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0605 17:54:30.293880  471785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
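
With CRI-O as the chosen runtime, the competing daemons are stopped, disabled, and masked so nothing restarts them. The sequence above reduces roughly to:

    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit"
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker || echo "docker is down"
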
	I0605 17:54:30.309492  471785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 17:54:30.330120  471785 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0605 17:54:30.331598  471785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0605 17:54:30.331711  471785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:54:30.344685  471785 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0605 17:54:30.344759  471785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:54:30.357650  471785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:54:30.370666  471785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:54:30.383845  471785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0605 17:54:30.395515  471785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0605 17:54:30.406073  471785 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0605 17:54:30.407275  471785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
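
After the sed edits, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read as below, and bridged traffic must stay visible to iptables for kube-proxy to work:

    # expected end state of /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    sudo sysctl net.bridge.bridge-nf-call-iptables      # wants "= 1"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
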
	I0605 17:54:30.418393  471785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 17:54:30.517950  471785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0605 17:54:30.656248  471785 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0605 17:54:30.656349  471785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0605 17:54:30.661445  471785 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0605 17:54:30.661515  471785 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0605 17:54:30.661536  471785 command_runner.go:130] > Device: 43h/67d	Inode: 186         Links: 1
	I0605 17:54:30.661561  471785 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0605 17:54:30.661594  471785 command_runner.go:130] > Access: 2023-06-05 17:54:30.640343209 +0000
	I0605 17:54:30.661627  471785 command_runner.go:130] > Modify: 2023-06-05 17:54:30.640343209 +0000
	I0605 17:54:30.661651  471785 command_runner.go:130] > Change: 2023-06-05 17:54:30.640343209 +0000
	I0605 17:54:30.661679  471785 command_runner.go:130] >  Birth: -
	I0605 17:54:30.661957  471785 start.go:549] Will wait 60s for crictl version
	I0605 17:54:30.662048  471785 ssh_runner.go:195] Run: which crictl
	I0605 17:54:30.666855  471785 command_runner.go:130] > /usr/bin/crictl
	I0605 17:54:30.666965  471785 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0605 17:54:30.713756  471785 command_runner.go:130] > Version:  0.1.0
	I0605 17:54:30.713776  471785 command_runner.go:130] > RuntimeName:  cri-o
	I0605 17:54:30.713782  471785 command_runner.go:130] > RuntimeVersion:  1.24.5
	I0605 17:54:30.713789  471785 command_runner.go:130] > RuntimeApiVersion:  v1
	I0605 17:54:30.716651  471785 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
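
Once crio.sock is up, crictl is the quickest smoke test against the runtime; the probes above amount to (crictl also reads the endpoint from the /etc/crictl.yaml written earlier, so the flag is optional):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl info    # runtime status and CNI config as JSON
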
	I0605 17:54:30.716778  471785 ssh_runner.go:195] Run: crio --version
	I0605 17:54:30.764711  471785 command_runner.go:130] > crio version 1.24.5
	I0605 17:54:30.764737  471785 command_runner.go:130] > Version:          1.24.5
	I0605 17:54:30.764746  471785 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0605 17:54:30.764760  471785 command_runner.go:130] > GitTreeState:     clean
	I0605 17:54:30.764768  471785 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0605 17:54:30.764773  471785 command_runner.go:130] > GoVersion:        go1.18.2
	I0605 17:54:30.764779  471785 command_runner.go:130] > Compiler:         gc
	I0605 17:54:30.764784  471785 command_runner.go:130] > Platform:         linux/arm64
	I0605 17:54:30.764790  471785 command_runner.go:130] > Linkmode:         dynamic
	I0605 17:54:30.764804  471785 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0605 17:54:30.764813  471785 command_runner.go:130] > SeccompEnabled:   true
	I0605 17:54:30.764818  471785 command_runner.go:130] > AppArmorEnabled:  false
	I0605 17:54:30.767064  471785 ssh_runner.go:195] Run: crio --version
	I0605 17:54:30.815453  471785 command_runner.go:130] > crio version 1.24.5
	I0605 17:54:30.815479  471785 command_runner.go:130] > Version:          1.24.5
	I0605 17:54:30.815500  471785 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0605 17:54:30.815512  471785 command_runner.go:130] > GitTreeState:     clean
	I0605 17:54:30.815519  471785 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0605 17:54:30.815528  471785 command_runner.go:130] > GoVersion:        go1.18.2
	I0605 17:54:30.815534  471785 command_runner.go:130] > Compiler:         gc
	I0605 17:54:30.815543  471785 command_runner.go:130] > Platform:         linux/arm64
	I0605 17:54:30.815550  471785 command_runner.go:130] > Linkmode:         dynamic
	I0605 17:54:30.815562  471785 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0605 17:54:30.815579  471785 command_runner.go:130] > SeccompEnabled:   true
	I0605 17:54:30.815588  471785 command_runner.go:130] > AppArmorEnabled:  false
	I0605 17:54:30.819834  471785 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0605 17:54:30.821944  471785 cli_runner.go:164] Run: docker network inspect multinode-292850 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 17:54:30.840494  471785 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0605 17:54:30.845375  471785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0605 17:54:30.860780  471785 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:54:30.860859  471785 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 17:54:30.926125  471785 command_runner.go:130] > {
	I0605 17:54:30.926149  471785 command_runner.go:130] >   "images": [
	I0605 17:54:30.926155  471785 command_runner.go:130] >     {
	I0605 17:54:30.926173  471785 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0605 17:54:30.926178  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.926186  471785 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0605 17:54:30.926191  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926196  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.926214  471785 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0605 17:54:30.926224  471785 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0605 17:54:30.926232  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926237  471785 command_runner.go:130] >       "size": "60881430",
	I0605 17:54:30.926242  471785 command_runner.go:130] >       "uid": null,
	I0605 17:54:30.926251  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.926258  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.926265  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.926269  471785 command_runner.go:130] >     },
	I0605 17:54:30.926274  471785 command_runner.go:130] >     {
	I0605 17:54:30.926285  471785 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0605 17:54:30.926290  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.926299  471785 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0605 17:54:30.926304  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926311  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.926321  471785 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0605 17:54:30.926334  471785 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0605 17:54:30.926341  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926349  471785 command_runner.go:130] >       "size": "29037500",
	I0605 17:54:30.926357  471785 command_runner.go:130] >       "uid": null,
	I0605 17:54:30.926362  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.926371  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.926375  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.926380  471785 command_runner.go:130] >     },
	I0605 17:54:30.926386  471785 command_runner.go:130] >     {
	I0605 17:54:30.926395  471785 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0605 17:54:30.926402  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.926409  471785 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0605 17:54:30.926416  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926421  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.926430  471785 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0605 17:54:30.926443  471785 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0605 17:54:30.926447  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926456  471785 command_runner.go:130] >       "size": "51393451",
	I0605 17:54:30.926460  471785 command_runner.go:130] >       "uid": null,
	I0605 17:54:30.926465  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.926473  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.926478  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.926482  471785 command_runner.go:130] >     },
	I0605 17:54:30.926489  471785 command_runner.go:130] >     {
	I0605 17:54:30.926497  471785 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0605 17:54:30.926505  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.926512  471785 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0605 17:54:30.926519  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926524  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.926533  471785 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0605 17:54:30.926545  471785 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0605 17:54:30.926553  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926561  471785 command_runner.go:130] >       "size": "182283991",
	I0605 17:54:30.926566  471785 command_runner.go:130] >       "uid": {
	I0605 17:54:30.926571  471785 command_runner.go:130] >         "value": "0"
	I0605 17:54:30.926577  471785 command_runner.go:130] >       },
	I0605 17:54:30.926582  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.926588  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.926595  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.926600  471785 command_runner.go:130] >     },
	I0605 17:54:30.926604  471785 command_runner.go:130] >     {
	I0605 17:54:30.926615  471785 command_runner.go:130] >       "id": "72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae",
	I0605 17:54:30.926621  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.926629  471785 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.2"
	I0605 17:54:30.926636  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926644  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.926654  471785 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:599c991fe774036dff5f54b3113290d83da173d7627ea259bd2a3064eaa7987e",
	I0605 17:54:30.926666  471785 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9"
	I0605 17:54:30.926671  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926679  471785 command_runner.go:130] >       "size": "116138960",
	I0605 17:54:30.926684  471785 command_runner.go:130] >       "uid": {
	I0605 17:54:30.926689  471785 command_runner.go:130] >         "value": "0"
	I0605 17:54:30.926696  471785 command_runner.go:130] >       },
	I0605 17:54:30.926701  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.926706  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.926713  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.926718  471785 command_runner.go:130] >     },
	I0605 17:54:30.926726  471785 command_runner.go:130] >     {
	I0605 17:54:30.926734  471785 command_runner.go:130] >       "id": "2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4",
	I0605 17:54:30.926741  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.926748  471785 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.2"
	I0605 17:54:30.926755  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926760  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.926771  471785 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6626c27b7df41d86340a701121792c5c0dc40ca8877c23478fc5659103bc7505",
	I0605 17:54:30.926784  471785 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"
	I0605 17:54:30.926792  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926798  471785 command_runner.go:130] >       "size": "108667702",
	I0605 17:54:30.926803  471785 command_runner.go:130] >       "uid": {
	I0605 17:54:30.926811  471785 command_runner.go:130] >         "value": "0"
	I0605 17:54:30.926815  471785 command_runner.go:130] >       },
	I0605 17:54:30.926824  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.926830  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.926835  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.926842  471785 command_runner.go:130] >     },
	I0605 17:54:30.926847  471785 command_runner.go:130] >     {
	I0605 17:54:30.926856  471785 command_runner.go:130] >       "id": "29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0",
	I0605 17:54:30.926864  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.926871  471785 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.2"
	I0605 17:54:30.926878  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926883  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.926892  471785 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f",
	I0605 17:54:30.926905  471785 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:7ebc3b4df29c385197555a543c4a3379cfcdabdfbe37e2b2ea3ceac87ce28bca"
	I0605 17:54:30.926910  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926918  471785 command_runner.go:130] >       "size": "68099991",
	I0605 17:54:30.926923  471785 command_runner.go:130] >       "uid": null,
	I0605 17:54:30.926928  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.926936  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.926941  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.926947  471785 command_runner.go:130] >     },
	I0605 17:54:30.926954  471785 command_runner.go:130] >     {
	I0605 17:54:30.926962  471785 command_runner.go:130] >       "id": "305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840",
	I0605 17:54:30.926971  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.926977  471785 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.2"
	I0605 17:54:30.926984  471785 command_runner.go:130] >       ],
	I0605 17:54:30.926989  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.927009  471785 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177",
	I0605 17:54:30.927022  471785 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e0ecd0ce2447789a58ad5e94acda2cff8ad4e6ca3ccc06041b89e7eb0b78a6c4"
	I0605 17:54:30.927028  471785 command_runner.go:130] >       ],
	I0605 17:54:30.927036  471785 command_runner.go:130] >       "size": "57615158",
	I0605 17:54:30.927041  471785 command_runner.go:130] >       "uid": {
	I0605 17:54:30.927048  471785 command_runner.go:130] >         "value": "0"
	I0605 17:54:30.927053  471785 command_runner.go:130] >       },
	I0605 17:54:30.927060  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.927066  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.927073  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.927078  471785 command_runner.go:130] >     },
	I0605 17:54:30.927085  471785 command_runner.go:130] >     {
	I0605 17:54:30.927093  471785 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0605 17:54:30.927100  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.927107  471785 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0605 17:54:30.927114  471785 command_runner.go:130] >       ],
	I0605 17:54:30.927119  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.927129  471785 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0605 17:54:30.927142  471785 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0605 17:54:30.927147  471785 command_runner.go:130] >       ],
	I0605 17:54:30.927155  471785 command_runner.go:130] >       "size": "520014",
	I0605 17:54:30.927160  471785 command_runner.go:130] >       "uid": {
	I0605 17:54:30.927169  471785 command_runner.go:130] >         "value": "65535"
	I0605 17:54:30.927173  471785 command_runner.go:130] >       },
	I0605 17:54:30.927179  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.927186  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.927191  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.927198  471785 command_runner.go:130] >     }
	I0605 17:54:30.927202  471785 command_runner.go:130] >   ]
	I0605 17:54:30.927207  471785 command_runner.go:130] > }
	I0605 17:54:30.927401  471785 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 17:54:30.927414  471785 crio.go:415] Images already preloaded, skipping extraction
	I0605 17:54:30.927471  471785 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 17:54:30.969750  471785 command_runner.go:130] > {
	I0605 17:54:30.969771  471785 command_runner.go:130] >   "images": [
	I0605 17:54:30.969776  471785 command_runner.go:130] >     {
	I0605 17:54:30.969787  471785 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0605 17:54:30.969792  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.969800  471785 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0605 17:54:30.969804  471785 command_runner.go:130] >       ],
	I0605 17:54:30.969809  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.969820  471785 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0605 17:54:30.969830  471785 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0605 17:54:30.969839  471785 command_runner.go:130] >       ],
	I0605 17:54:30.969844  471785 command_runner.go:130] >       "size": "60881430",
	I0605 17:54:30.969850  471785 command_runner.go:130] >       "uid": null,
	I0605 17:54:30.969860  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.969865  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.969874  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.969878  471785 command_runner.go:130] >     },
	I0605 17:54:30.969883  471785 command_runner.go:130] >     {
	I0605 17:54:30.969893  471785 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0605 17:54:30.969898  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.969904  471785 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0605 17:54:30.969912  471785 command_runner.go:130] >       ],
	I0605 17:54:30.969921  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.969934  471785 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0605 17:54:30.969944  471785 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0605 17:54:30.969949  471785 command_runner.go:130] >       ],
	I0605 17:54:30.969958  471785 command_runner.go:130] >       "size": "29037500",
	I0605 17:54:30.969966  471785 command_runner.go:130] >       "uid": null,
	I0605 17:54:30.969971  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.969976  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.969981  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.969986  471785 command_runner.go:130] >     },
	I0605 17:54:30.969992  471785 command_runner.go:130] >     {
	I0605 17:54:30.970000  471785 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0605 17:54:30.970008  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.970015  471785 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0605 17:54:30.970022  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970027  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.970036  471785 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0605 17:54:30.970049  471785 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0605 17:54:30.970054  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970062  471785 command_runner.go:130] >       "size": "51393451",
	I0605 17:54:30.970067  471785 command_runner.go:130] >       "uid": null,
	I0605 17:54:30.970072  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.970077  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.970086  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.970090  471785 command_runner.go:130] >     },
	I0605 17:54:30.970097  471785 command_runner.go:130] >     {
	I0605 17:54:30.970105  471785 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0605 17:54:30.970113  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.970119  471785 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0605 17:54:30.970126  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970131  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.970140  471785 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0605 17:54:30.970152  471785 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0605 17:54:30.970160  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970169  471785 command_runner.go:130] >       "size": "182283991",
	I0605 17:54:30.970174  471785 command_runner.go:130] >       "uid": {
	I0605 17:54:30.970179  471785 command_runner.go:130] >         "value": "0"
	I0605 17:54:30.970186  471785 command_runner.go:130] >       },
	I0605 17:54:30.970191  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.970198  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.970203  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.970211  471785 command_runner.go:130] >     },
	I0605 17:54:30.970215  471785 command_runner.go:130] >     {
	I0605 17:54:30.970223  471785 command_runner.go:130] >       "id": "72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae",
	I0605 17:54:30.970231  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.970237  471785 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.2"
	I0605 17:54:30.970244  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970250  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.970259  471785 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:599c991fe774036dff5f54b3113290d83da173d7627ea259bd2a3064eaa7987e",
	I0605 17:54:30.970272  471785 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9"
	I0605 17:54:30.970280  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970286  471785 command_runner.go:130] >       "size": "116138960",
	I0605 17:54:30.970295  471785 command_runner.go:130] >       "uid": {
	I0605 17:54:30.970300  471785 command_runner.go:130] >         "value": "0"
	I0605 17:54:30.970307  471785 command_runner.go:130] >       },
	I0605 17:54:30.970312  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.970318  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.970325  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.970330  471785 command_runner.go:130] >     },
	I0605 17:54:30.970335  471785 command_runner.go:130] >     {
	I0605 17:54:30.970345  471785 command_runner.go:130] >       "id": "2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4",
	I0605 17:54:30.970350  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.970357  471785 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.2"
	I0605 17:54:30.970364  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970369  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.970379  471785 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6626c27b7df41d86340a701121792c5c0dc40ca8877c23478fc5659103bc7505",
	I0605 17:54:30.970393  471785 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"
	I0605 17:54:30.970400  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970405  471785 command_runner.go:130] >       "size": "108667702",
	I0605 17:54:30.970412  471785 command_runner.go:130] >       "uid": {
	I0605 17:54:30.970417  471785 command_runner.go:130] >         "value": "0"
	I0605 17:54:30.970424  471785 command_runner.go:130] >       },
	I0605 17:54:30.970431  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.970439  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.970444  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.970451  471785 command_runner.go:130] >     },
	I0605 17:54:30.970456  471785 command_runner.go:130] >     {
	I0605 17:54:30.970466  471785 command_runner.go:130] >       "id": "29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0",
	I0605 17:54:30.970472  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.970481  471785 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.2"
	I0605 17:54:30.970486  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970494  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.970503  471785 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f",
	I0605 17:54:30.970515  471785 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:7ebc3b4df29c385197555a543c4a3379cfcdabdfbe37e2b2ea3ceac87ce28bca"
	I0605 17:54:30.970520  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970525  471785 command_runner.go:130] >       "size": "68099991",
	I0605 17:54:30.970532  471785 command_runner.go:130] >       "uid": null,
	I0605 17:54:30.970539  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.970547  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.970552  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.970556  471785 command_runner.go:130] >     },
	I0605 17:54:30.970563  471785 command_runner.go:130] >     {
	I0605 17:54:30.970571  471785 command_runner.go:130] >       "id": "305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840",
	I0605 17:54:30.970578  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.970585  471785 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.2"
	I0605 17:54:30.970592  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970597  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.970660  471785 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177",
	I0605 17:54:30.970677  471785 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e0ecd0ce2447789a58ad5e94acda2cff8ad4e6ca3ccc06041b89e7eb0b78a6c4"
	I0605 17:54:30.970682  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970689  471785 command_runner.go:130] >       "size": "57615158",
	I0605 17:54:30.970697  471785 command_runner.go:130] >       "uid": {
	I0605 17:54:30.970702  471785 command_runner.go:130] >         "value": "0"
	I0605 17:54:30.970707  471785 command_runner.go:130] >       },
	I0605 17:54:30.970714  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.970719  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.970724  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.970731  471785 command_runner.go:130] >     },
	I0605 17:54:30.970737  471785 command_runner.go:130] >     {
	I0605 17:54:30.970747  471785 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0605 17:54:30.970754  471785 command_runner.go:130] >       "repoTags": [
	I0605 17:54:30.970760  471785 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0605 17:54:30.970764  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970769  471785 command_runner.go:130] >       "repoDigests": [
	I0605 17:54:30.970781  471785 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0605 17:54:30.970791  471785 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0605 17:54:30.970799  471785 command_runner.go:130] >       ],
	I0605 17:54:30.970804  471785 command_runner.go:130] >       "size": "520014",
	I0605 17:54:30.970811  471785 command_runner.go:130] >       "uid": {
	I0605 17:54:30.970817  471785 command_runner.go:130] >         "value": "65535"
	I0605 17:54:30.970824  471785 command_runner.go:130] >       },
	I0605 17:54:30.970829  471785 command_runner.go:130] >       "username": "",
	I0605 17:54:30.970834  471785 command_runner.go:130] >       "spec": null,
	I0605 17:54:30.970841  471785 command_runner.go:130] >       "pinned": false
	I0605 17:54:30.970846  471785 command_runner.go:130] >     }
	I0605 17:54:30.970850  471785 command_runner.go:130] >   ]
	I0605 17:54:30.970854  471785 command_runner.go:130] > }
	I0605 17:54:30.973870  471785 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 17:54:30.973891  471785 cache_images.go:84] Images are preloaded, skipping loading
	I0605 17:54:30.973968  471785 ssh_runner.go:195] Run: crio config
	I0605 17:54:31.029481  471785 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0605 17:54:31.029504  471785 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0605 17:54:31.029513  471785 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0605 17:54:31.029517  471785 command_runner.go:130] > #
	I0605 17:54:31.029527  471785 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0605 17:54:31.029534  471785 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0605 17:54:31.029549  471785 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0605 17:54:31.029573  471785 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0605 17:54:31.029581  471785 command_runner.go:130] > # reload'.
	I0605 17:54:31.029589  471785 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0605 17:54:31.029597  471785 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0605 17:54:31.029607  471785 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0605 17:54:31.029615  471785 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0605 17:54:31.029624  471785 command_runner.go:130] > [crio]
	I0605 17:54:31.029631  471785 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0605 17:54:31.029637  471785 command_runner.go:130] > # container images, in this directory.
	I0605 17:54:31.029648  471785 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0605 17:54:31.029657  471785 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0605 17:54:31.029667  471785 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0605 17:54:31.029675  471785 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0605 17:54:31.029685  471785 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0605 17:54:31.029691  471785 command_runner.go:130] > # storage_driver = "vfs"
	I0605 17:54:31.029702  471785 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0605 17:54:31.029710  471785 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0605 17:54:31.029718  471785 command_runner.go:130] > # storage_option = [
	I0605 17:54:31.029722  471785 command_runner.go:130] > # ]
	I0605 17:54:31.029730  471785 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0605 17:54:31.029741  471785 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0605 17:54:31.029747  471785 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0605 17:54:31.029757  471785 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0605 17:54:31.029765  471785 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0605 17:54:31.029773  471785 command_runner.go:130] > # always happen on a node reboot
	I0605 17:54:31.029779  471785 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0605 17:54:31.029791  471785 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0605 17:54:31.029799  471785 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0605 17:54:31.029812  471785 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0605 17:54:31.029818  471785 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0605 17:54:31.029827  471785 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0605 17:54:31.029836  471785 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0605 17:54:31.029841  471785 command_runner.go:130] > # internal_wipe = true
	I0605 17:54:31.029852  471785 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0605 17:54:31.029859  471785 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0605 17:54:31.029867  471785 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0605 17:54:31.029873  471785 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
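
The [crio] comments above cover the storage root, the run directory, the pod log directory, and the version/wipe marker files. As a rough illustration, an uncommented override block might look like the sketch below; every option name and path is taken from the commented defaults above, shown here only to make the shape of the section concrete:

    [crio]
    root = "/home/docker/.local/share/containers/storage"   # image/container data ("root directory")
    runroot = "/tmp/containers-user-1000/containers"         # runtime state ("run directory")
    log_dir = "/var/log/crio/pods"                           # default pod log location
    version_file = "/var/run/crio/version"                   # checked on reboot; triggers container wipe
    version_file_persist = "/var/lib/crio/version"           # persists across reboots; guards image wipe
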
	I0605 17:54:31.029883  471785 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0605 17:54:31.029892  471785 command_runner.go:130] > [crio.api]
	I0605 17:54:31.029899  471785 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0605 17:54:31.029905  471785 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0605 17:54:31.029916  471785 command_runner.go:130] > # IP address on which the stream server will listen.
	I0605 17:54:31.029958  471785 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0605 17:54:31.029970  471785 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0605 17:54:31.029976  471785 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0605 17:54:31.029981  471785 command_runner.go:130] > # stream_port = "0"
	I0605 17:54:31.029987  471785 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0605 17:54:31.029992  471785 command_runner.go:130] > # stream_enable_tls = false
	I0605 17:54:31.030004  471785 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0605 17:54:31.030011  471785 command_runner.go:130] > # stream_idle_timeout = ""
	I0605 17:54:31.030025  471785 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0605 17:54:31.030033  471785 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0605 17:54:31.030041  471785 command_runner.go:130] > # minutes.
	I0605 17:54:31.030045  471785 command_runner.go:130] > # stream_tls_cert = ""
	I0605 17:54:31.030053  471785 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0605 17:54:31.030064  471785 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0605 17:54:31.030069  471785 command_runner.go:130] > # stream_tls_key = ""
	I0605 17:54:31.030080  471785 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0605 17:54:31.030088  471785 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0605 17:54:31.030098  471785 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0605 17:54:31.030103  471785 command_runner.go:130] > # stream_tls_ca = ""
	I0605 17:54:31.030118  471785 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0605 17:54:31.030125  471785 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0605 17:54:31.030138  471785 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0605 17:54:31.030144  471785 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
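
Putting the [crio.api] options together, a minimal sketch that pins the stream server to a fixed port and enables TLS. Only the option names come from the comments above; the port number and certificate paths are hypothetical examples, not values from this host:

    [crio.api]
    listen = "/var/run/crio/crio.sock"
    stream_address = "127.0.0.1"
    stream_port = "10010"                     # hypothetical; "0" means a random free port
    stream_enable_tls = true
    stream_tls_cert = "/etc/crio/stream.crt"  # hypothetical path; changes picked up within 5 minutes
    stream_tls_key = "/etc/crio/stream.key"   # hypothetical path
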
	I0605 17:54:31.030160  471785 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0605 17:54:31.030172  471785 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0605 17:54:31.030177  471785 command_runner.go:130] > [crio.runtime]
	I0605 17:54:31.030189  471785 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0605 17:54:31.030196  471785 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0605 17:54:31.030205  471785 command_runner.go:130] > # "nofile=1024:2048"
	I0605 17:54:31.030212  471785 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0605 17:54:31.030329  471785 command_runner.go:130] > # default_ulimits = [
	I0605 17:54:31.032017  471785 command_runner.go:130] > # ]
	I0605 17:54:31.032037  471785 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0605 17:54:31.032076  471785 command_runner.go:130] > # no_pivot = false
	I0605 17:54:31.032094  471785 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0605 17:54:31.032103  471785 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0605 17:54:31.032113  471785 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0605 17:54:31.032121  471785 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0605 17:54:31.032130  471785 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0605 17:54:31.032159  471785 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0605 17:54:31.032175  471785 command_runner.go:130] > # conmon = ""
	I0605 17:54:31.032187  471785 command_runner.go:130] > # Cgroup setting for conmon
	I0605 17:54:31.032197  471785 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0605 17:54:31.032205  471785 command_runner.go:130] > conmon_cgroup = "pod"
	I0605 17:54:31.032213  471785 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0605 17:54:31.032237  471785 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0605 17:54:31.032264  471785 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0605 17:54:31.032278  471785 command_runner.go:130] > # conmon_env = [
	I0605 17:54:31.032286  471785 command_runner.go:130] > # ]
	I0605 17:54:31.032293  471785 command_runner.go:130] > # Additional environment variables to set for all the
	I0605 17:54:31.032328  471785 command_runner.go:130] > # containers. These are overridden if set in the
	I0605 17:54:31.032341  471785 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0605 17:54:31.032350  471785 command_runner.go:130] > # default_env = [
	I0605 17:54:31.032354  471785 command_runner.go:130] > # ]
	I0605 17:54:31.032365  471785 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0605 17:54:31.032385  471785 command_runner.go:130] > # selinux = false
	I0605 17:54:31.032406  471785 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0605 17:54:31.032421  471785 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0605 17:54:31.032432  471785 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0605 17:54:31.032440  471785 command_runner.go:130] > # seccomp_profile = ""
	I0605 17:54:31.032447  471785 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0605 17:54:31.032480  471785 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0605 17:54:31.032495  471785 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0605 17:54:31.032506  471785 command_runner.go:130] > # which might increase security.
	I0605 17:54:31.032515  471785 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0605 17:54:31.032541  471785 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0605 17:54:31.032565  471785 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0605 17:54:31.032580  471785 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0605 17:54:31.032593  471785 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0605 17:54:31.032615  471785 command_runner.go:130] > # This option supports live configuration reload.
	I0605 17:54:31.032639  471785 command_runner.go:130] > # apparmor_profile = "crio-default"
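
The security-profile options above combine as in the following sketch, which simply restates the documented defaults in uncommented form:

    [crio.runtime]
    selinux = false
    seccomp_profile = ""                   # empty + the flag below = use the internal default profile
    seccomp_use_default_when_empty = true  # treat an empty profile as the default, not unconfined
    apparmor_profile = "crio-default"      # setting "unconfined" would disable AppArmor
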
	I0605 17:54:31.032656  471785 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0605 17:54:31.032666  471785 command_runner.go:130] > # the cgroup blockio controller.
	I0605 17:54:31.032676  471785 command_runner.go:130] > # blockio_config_file = ""
	I0605 17:54:31.032711  471785 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0605 17:54:31.032724  471785 command_runner.go:130] > # irqbalance daemon.
	I0605 17:54:31.032734  471785 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0605 17:54:31.032746  471785 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0605 17:54:31.032806  471785 command_runner.go:130] > # This option supports live configuration reload.
	I0605 17:54:31.032814  471785 command_runner.go:130] > # rdt_config_file = ""
	I0605 17:54:31.032821  471785 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0605 17:54:31.032837  471785 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0605 17:54:31.032845  471785 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0605 17:54:31.032851  471785 command_runner.go:130] > # separate_pull_cgroup = ""
	I0605 17:54:31.032867  471785 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0605 17:54:31.032881  471785 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0605 17:54:31.032886  471785 command_runner.go:130] > # will be added.
	I0605 17:54:31.032898  471785 command_runner.go:130] > # default_capabilities = [
	I0605 17:54:31.032919  471785 command_runner.go:130] > # 	"CHOWN",
	I0605 17:54:31.032931  471785 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0605 17:54:31.032936  471785 command_runner.go:130] > # 	"FSETID",
	I0605 17:54:31.032941  471785 command_runner.go:130] > # 	"FOWNER",
	I0605 17:54:31.032945  471785 command_runner.go:130] > # 	"SETGID",
	I0605 17:54:31.032958  471785 command_runner.go:130] > # 	"SETUID",
	I0605 17:54:31.032971  471785 command_runner.go:130] > # 	"SETPCAP",
	I0605 17:54:31.032977  471785 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0605 17:54:31.032999  471785 command_runner.go:130] > # 	"KILL",
	I0605 17:54:31.033010  471785 command_runner.go:130] > # ]
	I0605 17:54:31.033031  471785 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0605 17:54:31.033048  471785 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0605 17:54:31.033058  471785 command_runner.go:130] > # add_inheritable_capabilities = true
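
Uncommented, the default capability set documented above would read as follows (this is just the list from the comments, written out as live config):

    [crio.runtime]
    default_capabilities = [
        "CHOWN",
        "DAC_OVERRIDE",
        "FSETID",
        "FOWNER",
        "SETGID",
        "SETUID",
        "SETPCAP",
        "NET_BIND_SERVICE",
        "KILL",
    ]
    add_inheritable_capabilities = true   # needed if capabilities should work for non-root users
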
	I0605 17:54:31.033078  471785 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0605 17:54:31.033092  471785 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0605 17:54:31.033112  471785 command_runner.go:130] > # default_sysctls = [
	I0605 17:54:31.033122  471785 command_runner.go:130] > # ]
	I0605 17:54:31.033128  471785 command_runner.go:130] > # List of devices on the host that a
	I0605 17:54:31.033152  471785 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0605 17:54:31.033164  471785 command_runner.go:130] > # allowed_devices = [
	I0605 17:54:31.033179  471785 command_runner.go:130] > # 	"/dev/fuse",
	I0605 17:54:31.033190  471785 command_runner.go:130] > # ]
	I0605 17:54:31.033197  471785 command_runner.go:130] > # List of additional devices, specified as
	I0605 17:54:31.033231  471785 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0605 17:54:31.033252  471785 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0605 17:54:31.033268  471785 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0605 17:54:31.033282  471785 command_runner.go:130] > # additional_devices = [
	I0605 17:54:31.033302  471785 command_runner.go:130] > # ]
	I0605 17:54:31.033316  471785 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0605 17:54:31.033337  471785 command_runner.go:130] > # cdi_spec_dirs = [
	I0605 17:54:31.033346  471785 command_runner.go:130] > # 	"/etc/cdi",
	I0605 17:54:31.033351  471785 command_runner.go:130] > # 	"/var/run/cdi",
	I0605 17:54:31.033360  471785 command_runner.go:130] > # ]
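
A sketch of the device options, using only the formats the comments give ("/dev/fuse", the "<device-on-host>:<device-on-container>:<permissions>" example, and the default CDI directories):

    [crio.runtime]
    allowed_devices = [
        "/dev/fuse",               # requestable via the io.kubernetes.cri-o.Devices annotation
    ]
    additional_devices = [
        "/dev/sdc:/dev/xvdc:rwm",  # the host:container:permissions example from the comment above
    ]
    cdi_spec_dirs = [
        "/etc/cdi",
        "/var/run/cdi",
    ]
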
	I0605 17:54:31.033380  471785 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0605 17:54:31.033404  471785 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0605 17:54:31.033415  471785 command_runner.go:130] > # Defaults to false.
	I0605 17:54:31.033426  471785 command_runner.go:130] > # device_ownership_from_security_context = false
	I0605 17:54:31.033437  471785 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0605 17:54:31.033461  471785 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0605 17:54:31.033472  471785 command_runner.go:130] > # hooks_dir = [
	I0605 17:54:31.033489  471785 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0605 17:54:31.033500  471785 command_runner.go:130] > # ]
	I0605 17:54:31.033508  471785 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0605 17:54:31.033532  471785 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0605 17:54:31.033554  471785 command_runner.go:130] > # its default mounts from the following two files:
	I0605 17:54:31.033565  471785 command_runner.go:130] > #
	I0605 17:54:31.033573  471785 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0605 17:54:31.033585  471785 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0605 17:54:31.033595  471785 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0605 17:54:31.033617  471785 command_runner.go:130] > #
	I0605 17:54:31.033636  471785 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0605 17:54:31.033650  471785 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0605 17:54:31.033662  471785 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0605 17:54:31.033671  471785 command_runner.go:130] > #      only add mounts it finds in this file.
	I0605 17:54:31.033707  471785 command_runner.go:130] > #
	I0605 17:54:31.033719  471785 command_runner.go:130] > # default_mounts_file = ""
	I0605 17:54:31.033729  471785 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0605 17:54:31.033740  471785 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0605 17:54:31.033762  471785 command_runner.go:130] > # pids_limit = 0
	I0605 17:54:31.033784  471785 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0605 17:54:31.033797  471785 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0605 17:54:31.033808  471785 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0605 17:54:31.033821  471785 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0605 17:54:31.033846  471785 command_runner.go:130] > # log_size_max = -1
	I0605 17:54:31.033870  471785 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0605 17:54:31.033894  471785 command_runner.go:130] > # log_to_journald = false
	I0605 17:54:31.033920  471785 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0605 17:54:31.033933  471785 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0605 17:54:31.033955  471785 command_runner.go:130] > # Path to directory for container attach sockets.
	I0605 17:54:31.033968  471785 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0605 17:54:31.033979  471785 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0605 17:54:31.034003  471785 command_runner.go:130] > # bind_mount_prefix = ""
	I0605 17:54:31.034026  471785 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0605 17:54:31.034031  471785 command_runner.go:130] > # read_only = false
	I0605 17:54:31.034038  471785 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0605 17:54:31.034046  471785 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0605 17:54:31.034051  471785 command_runner.go:130] > # live configuration reload.
	I0605 17:54:31.034056  471785 command_runner.go:130] > # log_level = "info"
	I0605 17:54:31.034078  471785 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0605 17:54:31.034084  471785 command_runner.go:130] > # This option supports live configuration reload.
	I0605 17:54:31.034143  471785 command_runner.go:130] > # log_filter = ""
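
For debugging a failing node, the logging options above might be raised as in this sketch (the level value is illustrative; both options support live configuration reload per the comments):

    [crio.runtime]
    log_level = "debug"   # one of: fatal, panic, error, warn, info, debug, trace
    log_filter = ""       # optional regular expression; empty keeps all messages
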
	I0605 17:54:31.034152  471785 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0605 17:54:31.034160  471785 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0605 17:54:31.034165  471785 command_runner.go:130] > # separated by comma.
	I0605 17:54:31.034170  471785 command_runner.go:130] > # uid_mappings = ""
	I0605 17:54:31.034177  471785 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0605 17:54:31.034184  471785 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0605 17:54:31.034189  471785 command_runner.go:130] > # separated by comma.
	I0605 17:54:31.034194  471785 command_runner.go:130] > # gid_mappings = ""
	I0605 17:54:31.034201  471785 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0605 17:54:31.034210  471785 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0605 17:54:31.034223  471785 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0605 17:54:31.034229  471785 command_runner.go:130] > # minimum_mappable_uid = -1
	I0605 17:54:31.034236  471785 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0605 17:54:31.034243  471785 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0605 17:54:31.034251  471785 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0605 17:54:31.034257  471785 command_runner.go:130] > # minimum_mappable_gid = -1
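
The mapping options use the containerUID:HostUID:Size form described above; the 100000/65536 range in this sketch is purely illustrative, not taken from this run:

    [crio.runtime]
    uid_mappings = "0:100000:65536"   # containerUID:HostUID:Size (hypothetical range)
    gid_mappings = "0:100000:65536"   # containerGID:HostGID:Size (hypothetical range)
    minimum_mappable_uid = 100000     # reject attempts to map host UIDs below this value
    minimum_mappable_gid = 100000
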
	I0605 17:54:31.034264  471785 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0605 17:54:31.034271  471785 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0605 17:54:31.034278  471785 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0605 17:54:31.034283  471785 command_runner.go:130] > # ctr_stop_timeout = 30
	I0605 17:54:31.034297  471785 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0605 17:54:31.034304  471785 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0605 17:54:31.034314  471785 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0605 17:54:31.034320  471785 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0605 17:54:31.034325  471785 command_runner.go:130] > # drop_infra_ctr = true
	I0605 17:54:31.034333  471785 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0605 17:54:31.034339  471785 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0605 17:54:31.034348  471785 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0605 17:54:31.034355  471785 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0605 17:54:31.034369  471785 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0605 17:54:31.034376  471785 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0605 17:54:31.034381  471785 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0605 17:54:31.034389  471785 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0605 17:54:31.034394  471785 command_runner.go:130] > # pinns_path = ""
	I0605 17:54:31.034401  471785 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0605 17:54:31.034409  471785 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0605 17:54:31.034433  471785 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0605 17:54:31.034452  471785 command_runner.go:130] > # default_runtime = "runc"
	I0605 17:54:31.034459  471785 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0605 17:54:31.034468  471785 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0605 17:54:31.034479  471785 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0605 17:54:31.034529  471785 command_runner.go:130] > # creation as a file is not desired either.
	I0605 17:54:31.034570  471785 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0605 17:54:31.034576  471785 command_runner.go:130] > # the hostname is being managed dynamically.
	I0605 17:54:31.034586  471785 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0605 17:54:31.034605  471785 command_runner.go:130] > # ]
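
Filled in with the /etc/hostname case the comment gives, the option would look like:

    [crio.runtime]
    absent_mount_sources_to_reject = [
        "/etc/hostname",   # often absent (managed dynamically) and must not be auto-created as a directory
    ]
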
	I0605 17:54:31.034619  471785 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0605 17:54:31.034642  471785 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0605 17:54:31.034655  471785 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0605 17:54:31.034666  471785 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0605 17:54:31.034684  471785 command_runner.go:130] > #
	I0605 17:54:31.034696  471785 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0605 17:54:31.034717  471785 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0605 17:54:31.034728  471785 command_runner.go:130] > #  runtime_type = "oci"
	I0605 17:54:31.034740  471785 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0605 17:54:31.034762  471785 command_runner.go:130] > #  privileged_without_host_devices = false
	I0605 17:54:31.034774  471785 command_runner.go:130] > #  allowed_annotations = []
	I0605 17:54:31.034793  471785 command_runner.go:130] > # Where:
	I0605 17:54:31.034806  471785 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0605 17:54:31.034820  471785 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0605 17:54:31.034847  471785 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0605 17:54:31.034872  471785 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0605 17:54:31.034883  471785 command_runner.go:130] > #   in $PATH.
	I0605 17:54:31.034891  471785 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0605 17:54:31.034912  471785 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0605 17:54:31.034934  471785 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0605 17:54:31.034946  471785 command_runner.go:130] > #   state.
	I0605 17:54:31.034955  471785 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0605 17:54:31.034967  471785 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0605 17:54:31.034990  471785 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0605 17:54:31.035027  471785 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0605 17:54:31.035036  471785 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0605 17:54:31.035044  471785 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0605 17:54:31.035050  471785 command_runner.go:130] > #   The currently recognized values are:
	I0605 17:54:31.035067  471785 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0605 17:54:31.035085  471785 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0605 17:54:31.035099  471785 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0605 17:54:31.035107  471785 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0605 17:54:31.035116  471785 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0605 17:54:31.035124  471785 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0605 17:54:31.035145  471785 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0605 17:54:31.035162  471785 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0605 17:54:31.035178  471785 command_runner.go:130] > #   should be moved to the container's cgroup
	I0605 17:54:31.035190  471785 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0605 17:54:31.035197  471785 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0605 17:54:31.035220  471785 command_runner.go:130] > runtime_type = "oci"
	I0605 17:54:31.035231  471785 command_runner.go:130] > runtime_root = "/run/runc"
	I0605 17:54:31.035238  471785 command_runner.go:130] > runtime_config_path = ""
	I0605 17:54:31.035257  471785 command_runner.go:130] > monitor_path = ""
	I0605 17:54:31.035263  471785 command_runner.go:130] > monitor_cgroup = ""
	I0605 17:54:31.035272  471785 command_runner.go:130] > monitor_exec_cgroup = ""
	I0605 17:54:31.035322  471785 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0605 17:54:31.035336  471785 command_runner.go:130] > # running containers
	I0605 17:54:31.035341  471785 command_runner.go:130] > #[crio.runtime.runtimes.crun]
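
Following the documented runtime-handler table format, the commented-out crun handler above could be expanded roughly as below. The binary and root paths are assumptions for illustration, not values from this host:

    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"   # hypothetical location; searched via $PATH if omitted
    runtime_type = "oci"
    runtime_root = "/run/crun"       # hypothetical state directory
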
	I0605 17:54:31.035382  471785 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0605 17:54:31.035403  471785 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0605 17:54:31.035411  471785 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0605 17:54:31.035417  471785 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0605 17:54:31.035430  471785 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0605 17:54:31.035465  471785 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0605 17:54:31.035473  471785 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0605 17:54:31.035481  471785 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0605 17:54:31.035487  471785 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0605 17:54:31.035496  471785 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0605 17:54:31.035504  471785 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0605 17:54:31.035541  471785 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0605 17:54:31.035559  471785 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0605 17:54:31.035571  471785 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I0605 17:54:31.035582  471785 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0605 17:54:31.035593  471785 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0605 17:54:31.035616  471785 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0605 17:54:31.035631  471785 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0605 17:54:31.035650  471785 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0605 17:54:31.035661  471785 command_runner.go:130] > # Example:
	I0605 17:54:31.035667  471785 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0605 17:54:31.035673  471785 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0605 17:54:31.035693  471785 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0605 17:54:31.035709  471785 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0605 17:54:31.035725  471785 command_runner.go:130] > # cpuset = "0-1"
	I0605 17:54:31.035736  471785 command_runner.go:130] > # cpushares = 0
	I0605 17:54:31.035741  471785 command_runner.go:130] > # Where:
	I0605 17:54:31.035747  471785 command_runner.go:130] > # The workload name is workload-type.
	I0605 17:54:31.035771  471785 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0605 17:54:31.035787  471785 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0605 17:54:31.035794  471785 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0605 17:54:31.035813  471785 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0605 17:54:31.035849  471785 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
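
Restating the workload example above as one contiguous block; the resource values here are illustrative only (cpuset takes a Linux CPU list string, while the per-container annotation example above passes cpushares as a string value):

    [crio.runtime.workloads.workload-type]
    activation_annotation = "io.crio/workload"   # pods opt in with this annotation key (value ignored)
    annotation_prefix = "io.crio.workload-type"  # used for per-container overrides
    [crio.runtime.workloads.workload-type.resources]
    cpuset = "0-1"        # illustrative CPU list
    cpushares = "512"     # illustrative default share value
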
	I0605 17:54:31.035855  471785 command_runner.go:130] > # 
	I0605 17:54:31.035863  471785 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0605 17:54:31.035867  471785 command_runner.go:130] > #
	I0605 17:54:31.035877  471785 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0605 17:54:31.035885  471785 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0605 17:54:31.035893  471785 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0605 17:54:31.035928  471785 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0605 17:54:31.035954  471785 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0605 17:54:31.035974  471785 command_runner.go:130] > [crio.image]
	I0605 17:54:31.035984  471785 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0605 17:54:31.035989  471785 command_runner.go:130] > # default_transport = "docker://"
	I0605 17:54:31.036024  471785 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0605 17:54:31.036053  471785 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0605 17:54:31.036066  471785 command_runner.go:130] > # global_auth_file = ""
	I0605 17:54:31.036072  471785 command_runner.go:130] > # The image used to instantiate infra containers.
	I0605 17:54:31.036084  471785 command_runner.go:130] > # This option supports live configuration reload.
	I0605 17:54:31.036091  471785 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0605 17:54:31.036103  471785 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0605 17:54:31.036110  471785 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0605 17:54:31.036133  471785 command_runner.go:130] > # This option supports live configuration reload.
	I0605 17:54:31.036143  471785 command_runner.go:130] > # pause_image_auth_file = ""
	I0605 17:54:31.036151  471785 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0605 17:54:31.036165  471785 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0605 17:54:31.036174  471785 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0605 17:54:31.036184  471785 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0605 17:54:31.036190  471785 command_runner.go:130] > # pause_command = "/pause"
	I0605 17:54:31.036211  471785 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0605 17:54:31.036220  471785 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0605 17:54:31.036237  471785 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0605 17:54:31.036252  471785 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0605 17:54:31.036259  471785 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0605 17:54:31.036264  471785 command_runner.go:130] > # signature_policy = ""
	I0605 17:54:31.036275  471785 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0605 17:54:31.036283  471785 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0605 17:54:31.036291  471785 command_runner.go:130] > # changing them here.
	I0605 17:54:31.036297  471785 command_runner.go:130] > # insecure_registries = [
	I0605 17:54:31.036317  471785 command_runner.go:130] > # ]
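	The insecure_registries list above is the CRI-O-local way to allow plain-HTTP pulls. A minimal sketch, assuming a hypothetical registry at registry.example.internal:5000 and a CRI-O that reads drop-ins from /etc/crio/crio.conf.d (as 1.24.x does):

	    # Sketch only: allow plain-HTTP pulls from one hypothetical registry via a drop-in.
	    # Prefer /etc/containers/registries.conf for system-wide configuration, per the comment above.
	    sudo tee /etc/crio/crio.conf.d/99-insecure.conf >/dev/null <<'EOF'
	    [crio.image]
	    insecure_registries = [
	      "registry.example.internal:5000",
	    ]
	    EOF
	    sudo systemctl restart crio   # assumes CRI-O is managed by systemd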
	I0605 17:54:31.036331  471785 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0605 17:54:31.036348  471785 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0605 17:54:31.036364  471785 command_runner.go:130] > # image_volumes = "mkdir"
	I0605 17:54:31.036374  471785 command_runner.go:130] > # Temporary directory to use for storing big files
	I0605 17:54:31.036380  471785 command_runner.go:130] > # big_files_temporary_dir = ""
	I0605 17:54:31.036390  471785 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0605 17:54:31.036396  471785 command_runner.go:130] > # CNI plugins.
	I0605 17:54:31.036403  471785 command_runner.go:130] > [crio.network]
	I0605 17:54:31.036410  471785 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0605 17:54:31.036433  471785 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0605 17:54:31.036450  471785 command_runner.go:130] > # cni_default_network = ""
	I0605 17:54:31.036463  471785 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0605 17:54:31.036469  471785 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0605 17:54:31.036479  471785 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0605 17:54:31.036485  471785 command_runner.go:130] > # plugin_dirs = [
	I0605 17:54:31.036492  471785 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0605 17:54:31.036497  471785 command_runner.go:130] > # ]
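	The [crio.network] table only points at directories; the actual network definition is whatever conflist sits in network_dir. As an illustration (not what minikube installs here; kindnet is applied later in this log), a minimal bridge conflist that CRI-O could pick up:

	    # Sketch only: a minimal CNI conflist for the default network_dir.
	    # Assumes the bridge, host-local and portmap plugin binaries exist under /opt/cni/bin/.
	    sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge-net",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "cni0",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF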
	I0605 17:54:31.036504  471785 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0605 17:54:31.036524  471785 command_runner.go:130] > [crio.metrics]
	I0605 17:54:31.036537  471785 command_runner.go:130] > # Globally enable or disable metrics support.
	I0605 17:54:31.036553  471785 command_runner.go:130] > # enable_metrics = false
	I0605 17:54:31.036566  471785 command_runner.go:130] > # Specify enabled metrics collectors.
	I0605 17:54:31.036572  471785 command_runner.go:130] > # Per default all metrics are enabled.
	I0605 17:54:31.036582  471785 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0605 17:54:31.036591  471785 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0605 17:54:31.036602  471785 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0605 17:54:31.036608  471785 command_runner.go:130] > # metrics_collectors = [
	I0605 17:54:31.036615  471785 command_runner.go:130] > # 	"operations",
	I0605 17:54:31.036637  471785 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0605 17:54:31.036653  471785 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0605 17:54:31.036665  471785 command_runner.go:130] > # 	"operations_errors",
	I0605 17:54:31.036670  471785 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0605 17:54:31.036678  471785 command_runner.go:130] > # 	"image_pulls_by_name",
	I0605 17:54:31.036697  471785 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0605 17:54:31.036711  471785 command_runner.go:130] > # 	"image_pulls_failures",
	I0605 17:54:31.036724  471785 command_runner.go:130] > # 	"image_pulls_successes",
	I0605 17:54:31.036737  471785 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0605 17:54:31.036743  471785 command_runner.go:130] > # 	"image_layer_reuse",
	I0605 17:54:31.036751  471785 command_runner.go:130] > # 	"containers_oom_total",
	I0605 17:54:31.036756  471785 command_runner.go:130] > # 	"containers_oom",
	I0605 17:54:31.036764  471785 command_runner.go:130] > # 	"processes_defunct",
	I0605 17:54:31.036769  471785 command_runner.go:130] > # 	"operations_total",
	I0605 17:54:31.036776  471785 command_runner.go:130] > # 	"operations_latency_seconds",
	I0605 17:54:31.036785  471785 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0605 17:54:31.036791  471785 command_runner.go:130] > # 	"operations_errors_total",
	I0605 17:54:31.036813  471785 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0605 17:54:31.036829  471785 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0605 17:54:31.036845  471785 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0605 17:54:31.036851  471785 command_runner.go:130] > # 	"image_pulls_success_total",
	I0605 17:54:31.036859  471785 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0605 17:54:31.036865  471785 command_runner.go:130] > # 	"containers_oom_count_total",
	I0605 17:54:31.036871  471785 command_runner.go:130] > # ]
	I0605 17:54:31.036878  471785 command_runner.go:130] > # The port on which the metrics server will listen.
	I0605 17:54:31.036886  471785 command_runner.go:130] > # metrics_port = 9090
	I0605 17:54:31.036901  471785 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0605 17:54:31.036918  471785 command_runner.go:130] > # metrics_socket = ""
	I0605 17:54:31.036933  471785 command_runner.go:130] > # The certificate for the secure metrics server.
	I0605 17:54:31.036946  471785 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0605 17:54:31.036955  471785 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0605 17:54:31.036965  471785 command_runner.go:130] > # certificate on any modification event.
	I0605 17:54:31.036970  471785 command_runner.go:130] > # metrics_cert = ""
	I0605 17:54:31.036981  471785 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0605 17:54:31.036988  471785 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0605 17:54:31.036995  471785 command_runner.go:130] > # metrics_key = ""
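	Everything in [crio.metrics] ships disabled. A hedged sketch of turning the endpoint on and spot-checking it (the crio_ metric-name prefix is assumed from the collector-naming comment above):

	    # Sketch only: enable CRI-O's Prometheus endpoint via a drop-in, then scrape it.
	    sudo tee /etc/crio/crio.conf.d/10-metrics.conf >/dev/null <<'EOF'
	    [crio.metrics]
	    enable_metrics = true
	    metrics_port = 9090
	    EOF
	    sudo systemctl restart crio
	    curl -s http://127.0.0.1:9090/metrics | grep -m5 'crio_operations'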
	I0605 17:54:31.037013  471785 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0605 17:54:31.037023  471785 command_runner.go:130] > [crio.tracing]
	I0605 17:54:31.037038  471785 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0605 17:54:31.037049  471785 command_runner.go:130] > # enable_tracing = false
	I0605 17:54:31.037056  471785 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0605 17:54:31.037066  471785 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0605 17:54:31.037076  471785 command_runner.go:130] > # Number of samples to collect per million spans.
	I0605 17:54:31.037086  471785 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0605 17:54:31.037093  471785 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0605 17:54:31.037101  471785 command_runner.go:130] > [crio.stats]
	I0605 17:54:31.037118  471785 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0605 17:54:31.037130  471785 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0605 17:54:31.037145  471785 command_runner.go:130] > # stats_collection_period = 0
	I0605 17:54:31.037190  471785 command_runner.go:130] ! time="2023-06-05 17:54:31.026582267Z" level=info msg="Starting CRI-O, version: 1.24.5, git: b007cb6753d97de6218787b6894b0e3cc1dc8ecd(clean)"
	I0605 17:54:31.037222  471785 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0605 17:54:31.037319  471785 cni.go:84] Creating CNI manager for ""
	I0605 17:54:31.037334  471785 cni.go:136] 1 nodes found, recommending kindnet
	I0605 17:54:31.037353  471785 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0605 17:54:31.037377  471785 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-292850 NodeName:multinode-292850 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0605 17:54:31.037570  471785 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-292850"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
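	This rendered config is written to the node a few lines below (the 2097-byte scp to /var/tmp/minikube/kubeadm.yaml.new) and promoted to kubeadm.yaml before init runs. A hedged way to eyeball it on a live cluster:

	    # Sketch only: inspect the kubeadm config minikube rendered for this profile.
	    minikube -p multinode-292850 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml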
	
	I0605 17:54:31.037653  471785 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-292850 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-292850 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
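	The drop-in above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 426-byte scp below); the empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before overriding it. To see the merged unit:

	    # Sketch only: show the kubelet unit together with its minikube drop-in.
	    minikube -p multinode-292850 ssh -- sudo systemctl cat kubelet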
	I0605 17:54:31.037746  471785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0605 17:54:31.049429  471785 command_runner.go:130] > kubeadm
	I0605 17:54:31.049449  471785 command_runner.go:130] > kubectl
	I0605 17:54:31.049454  471785 command_runner.go:130] > kubelet
	I0605 17:54:31.049481  471785 binaries.go:44] Found k8s binaries, skipping transfer
	I0605 17:54:31.049550  471785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0605 17:54:31.061014  471785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0605 17:54:31.086063  471785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0605 17:54:31.111301  471785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0605 17:54:31.135492  471785 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0605 17:54:31.140738  471785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
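	The grep above checks whether control-plane.minikube.internal is already pinned; the bash one-liner then strips any stale entry and rewrites /etc/hosts through a temp file, so the node always resolves the control-plane alias to 192.168.58.2. A hedged verification:

	    # Sketch only: confirm the control-plane alias is pinned inside the node.
	    minikube -p multinode-292850 ssh -- grep control-plane.minikube.internal /etc/hosts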
	I0605 17:54:31.155868  471785 certs.go:56] Setting up /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850 for IP: 192.168.58.2
	I0605 17:54:31.155904  471785 certs.go:190] acquiring lock for shared ca certs: {Name:mkcde6289d01a116d789395fcd8dd485889e790f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:54:31.156126  471785 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key
	I0605 17:54:31.156196  471785 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key
	I0605 17:54:31.156259  471785 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.key
	I0605 17:54:31.156274  471785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.crt with IP's: []
	I0605 17:54:31.418923  471785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.crt ...
	I0605 17:54:31.418960  471785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.crt: {Name:mkc0c643228bb4a8c96fe98efd138cb587b03ad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:54:31.419602  471785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.key ...
	I0605 17:54:31.419621  471785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.key: {Name:mke7b2c51463b483c108321647a189b7a911d76f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:54:31.420369  471785 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.key.cee25041
	I0605 17:54:31.420393  471785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0605 17:54:31.719862  471785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.crt.cee25041 ...
	I0605 17:54:31.719896  471785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.crt.cee25041: {Name:mk8fcedd9341bba949d7d21c468037d59a3d6a91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:54:31.720647  471785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.key.cee25041 ...
	I0605 17:54:31.720664  471785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.key.cee25041: {Name:mkef8e9aaf9be3715a2163f278a4be5f09fe0df4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:54:31.721367  471785 certs.go:337] copying /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.crt
	I0605 17:54:31.721464  471785 certs.go:341] copying /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.key
	I0605 17:54:31.721535  471785 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/proxy-client.key
	I0605 17:54:31.721553  471785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/proxy-client.crt with IP's: []
	I0605 17:54:31.937096  471785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/proxy-client.crt ...
	I0605 17:54:31.937137  471785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/proxy-client.crt: {Name:mk072dd68e3992a4ffff9ffd39d7c27b0b844742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:54:31.937334  471785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/proxy-client.key ...
	I0605 17:54:31.937349  471785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/proxy-client.key: {Name:mkf3529dc67bfe23e056f970f81faf55b8f17dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:54:31.938033  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0605 17:54:31.938065  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0605 17:54:31.938080  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0605 17:54:31.938092  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0605 17:54:31.938103  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0605 17:54:31.938121  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0605 17:54:31.938136  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0605 17:54:31.938154  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0605 17:54:31.938253  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem (1338 bytes)
	W0605 17:54:31.938297  471785 certs.go:433] ignoring /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813_empty.pem, impossibly tiny 0 bytes
	I0605 17:54:31.938315  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem (1679 bytes)
	I0605 17:54:31.938347  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem (1082 bytes)
	I0605 17:54:31.938375  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem (1123 bytes)
	I0605 17:54:31.938409  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem (1675 bytes)
	I0605 17:54:31.938458  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 17:54:31.938489  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem -> /usr/share/ca-certificates/407813.pem
	I0605 17:54:31.938504  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> /usr/share/ca-certificates/4078132.pem
	I0605 17:54:31.938520  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:54:31.939089  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0605 17:54:31.969868  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0605 17:54:32.004862  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0605 17:54:32.037735  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0605 17:54:32.067995  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0605 17:54:32.098883  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0605 17:54:32.129590  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0605 17:54:32.160261  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0605 17:54:32.190533  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem --> /usr/share/ca-certificates/407813.pem (1338 bytes)
	I0605 17:54:32.222068  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /usr/share/ca-certificates/4078132.pem (1708 bytes)
	I0605 17:54:32.254177  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0605 17:54:32.284850  471785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0605 17:54:32.307262  471785 ssh_runner.go:195] Run: openssl version
	I0605 17:54:32.315019  471785 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0605 17:54:32.315126  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4078132.pem && ln -fs /usr/share/ca-certificates/4078132.pem /etc/ssl/certs/4078132.pem"
	I0605 17:54:32.327714  471785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4078132.pem
	I0605 17:54:32.332761  471785 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  5 17:39 /usr/share/ca-certificates/4078132.pem
	I0605 17:54:32.332842  471785 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun  5 17:39 /usr/share/ca-certificates/4078132.pem
	I0605 17:54:32.332922  471785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4078132.pem
	I0605 17:54:32.342098  471785 command_runner.go:130] > 3ec20f2e
	I0605 17:54:32.342200  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4078132.pem /etc/ssl/certs/3ec20f2e.0"
	I0605 17:54:32.354729  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0605 17:54:32.366684  471785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:54:32.371571  471785 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  5 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:54:32.371600  471785 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun  5 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:54:32.371673  471785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:54:32.380550  471785 command_runner.go:130] > b5213941
	I0605 17:54:32.380997  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0605 17:54:32.393622  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407813.pem && ln -fs /usr/share/ca-certificates/407813.pem /etc/ssl/certs/407813.pem"
	I0605 17:54:32.406107  471785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407813.pem
	I0605 17:54:32.411139  471785 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  5 17:39 /usr/share/ca-certificates/407813.pem
	I0605 17:54:32.411238  471785 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun  5 17:39 /usr/share/ca-certificates/407813.pem
	I0605 17:54:32.411321  471785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407813.pem
	I0605 17:54:32.420818  471785 command_runner.go:130] > 51391683
	I0605 17:54:32.421305  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407813.pem /etc/ssl/certs/51391683.0"
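	The three openssl x509 -hash / ln -fs rounds above implement OpenSSL's hashed-directory lookup: tools that trust /etc/ssl/certs locate a CA via a symlink named <subject-hash>.0. The same convention for a hypothetical certificate:

	    # Sketch only: the subject-hash symlink convention used above.
	    # mycert.pem is a placeholder; the .0 suffix is the collision index.
	    h=$(openssl x509 -hash -noout -in mycert.pem)
	    sudo ln -fs "$(pwd)/mycert.pem" "/etc/ssl/certs/${h}.0"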
	I0605 17:54:32.433504  471785 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0605 17:54:32.438382  471785 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0605 17:54:32.438425  471785 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0605 17:54:32.438490  471785 kubeadm.go:404] StartCluster: {Name:multinode-292850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-292850 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:54:32.438591  471785 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0605 17:54:32.438653  471785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0605 17:54:32.490555  471785 cri.go:88] found id: ""
	I0605 17:54:32.490673  471785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0605 17:54:32.502126  471785 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0605 17:54:32.502155  471785 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0605 17:54:32.502164  471785 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0605 17:54:32.502238  471785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0605 17:54:32.513822  471785 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0605 17:54:32.513942  471785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0605 17:54:32.525448  471785 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0605 17:54:32.525471  471785 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0605 17:54:32.525481  471785 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0605 17:54:32.525489  471785 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0605 17:54:32.525537  471785 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0605 17:54:32.525572  471785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0605 17:54:32.578102  471785 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0605 17:54:32.578131  471785 command_runner.go:130] > [init] Using Kubernetes version: v1.27.2
	I0605 17:54:32.578356  471785 kubeadm.go:322] [preflight] Running pre-flight checks
	I0605 17:54:32.578373  471785 command_runner.go:130] > [preflight] Running pre-flight checks
	I0605 17:54:32.625978  471785 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0605 17:54:32.626005  471785 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0605 17:54:32.626057  471785 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-aws
	I0605 17:54:32.626068  471785 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1037-aws
	I0605 17:54:32.626100  471785 kubeadm.go:322] OS: Linux
	I0605 17:54:32.626109  471785 command_runner.go:130] > OS: Linux
	I0605 17:54:32.626151  471785 kubeadm.go:322] CGROUPS_CPU: enabled
	I0605 17:54:32.626160  471785 command_runner.go:130] > CGROUPS_CPU: enabled
	I0605 17:54:32.626205  471785 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0605 17:54:32.626212  471785 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0605 17:54:32.626264  471785 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0605 17:54:32.626277  471785 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0605 17:54:32.626321  471785 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0605 17:54:32.626330  471785 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0605 17:54:32.626374  471785 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0605 17:54:32.626383  471785 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0605 17:54:32.626427  471785 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0605 17:54:32.626436  471785 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0605 17:54:32.626478  471785 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0605 17:54:32.626486  471785 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0605 17:54:32.626530  471785 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0605 17:54:32.626538  471785 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0605 17:54:32.626586  471785 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0605 17:54:32.626593  471785 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0605 17:54:32.706274  471785 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0605 17:54:32.706305  471785 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0605 17:54:32.706396  471785 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0605 17:54:32.706406  471785 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0605 17:54:32.706492  471785 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0605 17:54:32.706500  471785 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0605 17:54:32.976331  471785 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0605 17:54:32.982233  471785 out.go:204]   - Generating certificates and keys ...
	I0605 17:54:32.976648  471785 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0605 17:54:32.982439  471785 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0605 17:54:32.982478  471785 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0605 17:54:32.982568  471785 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0605 17:54:32.982593  471785 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0605 17:54:33.136521  471785 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0605 17:54:33.136605  471785 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0605 17:54:33.486356  471785 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0605 17:54:33.486382  471785 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0605 17:54:33.610876  471785 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0605 17:54:33.610911  471785 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0605 17:54:33.965920  471785 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0605 17:54:33.965946  471785 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0605 17:54:34.468001  471785 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0605 17:54:34.468031  471785 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0605 17:54:34.468194  471785 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-292850] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0605 17:54:34.468213  471785 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-292850] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0605 17:54:35.421181  471785 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0605 17:54:35.421206  471785 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0605 17:54:35.421580  471785 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-292850] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0605 17:54:35.421594  471785 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-292850] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0605 17:54:35.864658  471785 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0605 17:54:35.864685  471785 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0605 17:54:36.510280  471785 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0605 17:54:36.510309  471785 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0605 17:54:36.683853  471785 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0605 17:54:36.683878  471785 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0605 17:54:36.684006  471785 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0605 17:54:36.684021  471785 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0605 17:54:37.656229  471785 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0605 17:54:37.656257  471785 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0605 17:54:38.284811  471785 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0605 17:54:38.284836  471785 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0605 17:54:38.599631  471785 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0605 17:54:38.599657  471785 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0605 17:54:38.906713  471785 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0605 17:54:38.906743  471785 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0605 17:54:38.918288  471785 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0605 17:54:38.918319  471785 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0605 17:54:38.919301  471785 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0605 17:54:38.919324  471785 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0605 17:54:38.919618  471785 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0605 17:54:38.919636  471785 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0605 17:54:39.026225  471785 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0605 17:54:39.029172  471785 out.go:204]   - Booting up control plane ...
	I0605 17:54:39.026333  471785 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0605 17:54:39.029303  471785 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0605 17:54:39.029315  471785 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0605 17:54:39.031324  471785 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0605 17:54:39.031349  471785 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0605 17:54:39.038395  471785 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0605 17:54:39.038425  471785 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0605 17:54:39.044420  471785 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0605 17:54:39.044447  471785 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0605 17:54:39.044592  471785 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0605 17:54:39.044598  471785 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0605 17:54:46.546683  471785 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502683 seconds
	I0605 17:54:46.546712  471785 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.502683 seconds
	I0605 17:54:46.546813  471785 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0605 17:54:46.546822  471785 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0605 17:54:46.561060  471785 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0605 17:54:46.561089  471785 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0605 17:54:47.092263  471785 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0605 17:54:47.092289  471785 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0605 17:54:47.092462  471785 kubeadm.go:322] [mark-control-plane] Marking the node multinode-292850 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0605 17:54:47.092467  471785 command_runner.go:130] > [mark-control-plane] Marking the node multinode-292850 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0605 17:54:47.604200  471785 kubeadm.go:322] [bootstrap-token] Using token: ie22bv.9azk12vyi50i0vrm
	I0605 17:54:47.606384  471785 out.go:204]   - Configuring RBAC rules ...
	I0605 17:54:47.604305  471785 command_runner.go:130] > [bootstrap-token] Using token: ie22bv.9azk12vyi50i0vrm
	I0605 17:54:47.606497  471785 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0605 17:54:47.606508  471785 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0605 17:54:47.612762  471785 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0605 17:54:47.612810  471785 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0605 17:54:47.621704  471785 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0605 17:54:47.621736  471785 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0605 17:54:47.630068  471785 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0605 17:54:47.630079  471785 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0605 17:54:47.634857  471785 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0605 17:54:47.634882  471785 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0605 17:54:47.639258  471785 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0605 17:54:47.639288  471785 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0605 17:54:47.657898  471785 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0605 17:54:47.657930  471785 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0605 17:54:47.910566  471785 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0605 17:54:47.910593  471785 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0605 17:54:48.053730  471785 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0605 17:54:48.053769  471785 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0605 17:54:48.053776  471785 kubeadm.go:322] 
	I0605 17:54:48.053833  471785 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0605 17:54:48.053843  471785 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0605 17:54:48.053847  471785 kubeadm.go:322] 
	I0605 17:54:48.053921  471785 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0605 17:54:48.053930  471785 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0605 17:54:48.053934  471785 kubeadm.go:322] 
	I0605 17:54:48.053959  471785 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0605 17:54:48.053967  471785 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0605 17:54:48.054023  471785 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0605 17:54:48.054031  471785 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0605 17:54:48.054079  471785 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0605 17:54:48.054087  471785 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0605 17:54:48.054092  471785 kubeadm.go:322] 
	I0605 17:54:48.054144  471785 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0605 17:54:48.054153  471785 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0605 17:54:48.054158  471785 kubeadm.go:322] 
	I0605 17:54:48.054203  471785 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0605 17:54:48.054212  471785 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0605 17:54:48.054217  471785 kubeadm.go:322] 
	I0605 17:54:48.054266  471785 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0605 17:54:48.054275  471785 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0605 17:54:48.054345  471785 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0605 17:54:48.054354  471785 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0605 17:54:48.054418  471785 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0605 17:54:48.054427  471785 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0605 17:54:48.054431  471785 kubeadm.go:322] 
	I0605 17:54:48.054521  471785 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0605 17:54:48.054533  471785 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0605 17:54:48.054615  471785 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0605 17:54:48.054624  471785 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0605 17:54:48.054628  471785 kubeadm.go:322] 
	I0605 17:54:48.054709  471785 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ie22bv.9azk12vyi50i0vrm \
	I0605 17:54:48.054718  471785 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ie22bv.9azk12vyi50i0vrm \
	I0605 17:54:48.054815  471785 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 \
	I0605 17:54:48.054822  471785 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 \
	I0605 17:54:48.054841  471785 kubeadm.go:322] 	--control-plane 
	I0605 17:54:48.054850  471785 command_runner.go:130] > 	--control-plane 
	I0605 17:54:48.054854  471785 kubeadm.go:322] 
	I0605 17:54:48.054935  471785 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0605 17:54:48.054943  471785 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0605 17:54:48.054948  471785 kubeadm.go:322] 
	I0605 17:54:48.055025  471785 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ie22bv.9azk12vyi50i0vrm \
	I0605 17:54:48.055033  471785 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ie22bv.9azk12vyi50i0vrm \
	I0605 17:54:48.055129  471785 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 
	I0605 17:54:48.055137  471785 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 
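	The bootstrap token printed here carries the 24h0m0s TTL from the InitConfiguration above, so the join commands go stale; kubeadm can mint a fresh one on the control plane at any time:

	    # Sketch only: regenerate a worker join command after the token expires.
	    minikube -p multinode-292850 ssh -- \
	      sudo /var/lib/minikube/binaries/v1.27.2/kubeadm token create --print-join-command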
	I0605 17:54:48.059304  471785 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0605 17:54:48.059330  471785 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0605 17:54:48.059439  471785 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0605 17:54:48.059451  471785 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0605 17:54:48.059624  471785 kubeadm.go:322] W0605 17:54:32.706344    1079 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0605 17:54:48.059632  471785 command_runner.go:130] ! W0605 17:54:32.706344    1079 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0605 17:54:48.059800  471785 kubeadm.go:322] W0605 17:54:39.041085    1079 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0605 17:54:48.059808  471785 command_runner.go:130] ! W0605 17:54:39.041085    1079 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0605 17:54:48.059823  471785 cni.go:84] Creating CNI manager for ""
	I0605 17:54:48.059843  471785 cni.go:136] 1 nodes found, recommending kindnet
	I0605 17:54:48.063016  471785 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0605 17:54:48.065790  471785 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0605 17:54:48.082307  471785 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0605 17:54:48.082330  471785 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0605 17:54:48.082338  471785 command_runner.go:130] > Device: 3ah/58d	Inode: 3642593     Links: 1
	I0605 17:54:48.082346  471785 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0605 17:54:48.082352  471785 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0605 17:54:48.082358  471785 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0605 17:54:48.082364  471785 command_runner.go:130] > Change: 2023-06-05 17:31:01.224910109 +0000
	I0605 17:54:48.082370  471785 command_runner.go:130] >  Birth: 2023-06-05 17:31:01.180910227 +0000
	I0605 17:54:48.083420  471785 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0605 17:54:48.083439  471785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0605 17:54:48.152679  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0605 17:54:49.149797  471785 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0605 17:54:49.162713  471785 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0605 17:54:49.177007  471785 command_runner.go:130] > serviceaccount/kindnet created
	I0605 17:54:49.192135  471785 command_runner.go:130] > daemonset.apps/kindnet created
	I0605 17:54:49.197697  471785 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.044985768s)
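	The manifest applied here creates the kindnet RBAC objects and a kube-system DaemonSet. A hedged check that the CNI pods actually schedule (the app=kindnet label is assumed from minikube's kindnet manifest):

	    # Sketch only: verify the kindnet DaemonSet rolled out.
	    kubectl --context multinode-292850 -n kube-system get ds kindnet
	    kubectl --context multinode-292850 -n kube-system get pods -l app=kindnet -o wide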
	I0605 17:54:49.197747  471785 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0605 17:54:49.197860  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:49.197926  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=b059332e570e1d712234ec4f823aa77854e7956d minikube.k8s.io/name=multinode-292850 minikube.k8s.io/updated_at=2023_06_05T17_54_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:49.402818  471785 command_runner.go:130] > node/multinode-292850 labeled
	I0605 17:54:49.406955  471785 command_runner.go:130] > -16
	I0605 17:54:49.406986  471785 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0605 17:54:49.407010  471785 ops.go:34] apiserver oom_adj: -16
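
The -16 read back above comes from /proc/<pid>/oom_adj of the apiserver: a negative adjustment makes the kernel's OOM killer much less likely to pick that process. A Go equivalent of the log's shell one-liner (cat /proc/$(pgrep kube-apiserver)/oom_adj), assuming a single kube-apiserver pid:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // Read the apiserver's oom_adj, mirroring the shell pipeline in the log.
    func main() {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("pgrep failed:", err)
            return
        }
        raw, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(raw))) // -16 in this run
    }
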
	I0605 17:54:49.407080  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:49.511715  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:50.012355  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:50.112186  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:50.512930  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:50.607538  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:51.012051  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:51.109415  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:51.512000  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:51.606930  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:52.012355  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:52.110572  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:52.512066  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:52.612291  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:53.012125  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:53.104775  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:53.511894  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:53.607797  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:54.012118  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:54.111090  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:54.512842  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:54.605376  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:55.012267  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:55.112217  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:55.511945  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:55.610415  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:56.012051  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:56.112304  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:56.511937  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:56.600188  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:57.012881  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:57.109653  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:57.512416  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:57.599383  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:58.012634  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:58.113612  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:58.512183  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:58.606708  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:59.012418  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:59.113862  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:54:59.512691  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:54:59.605955  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:55:00.012426  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:55:00.264477  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:55:00.511946  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:55:00.607846  471785 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0605 17:55:01.012285  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0605 17:55:01.155902  471785 command_runner.go:130] > NAME      SECRETS   AGE
	I0605 17:55:01.155950  471785 command_runner.go:130] > default   0         1s
	I0605 17:55:01.159686  471785 kubeadm.go:1076] duration metric: took 11.961865764s to wait for elevateKubeSystemPrivileges.
	I0605 17:55:01.159711  471785 kubeadm.go:406] StartCluster complete in 28.721225011s
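
The run of NotFound lines above is expected: the "default" ServiceAccount is created asynchronously by kube-controller-manager, so minikube simply re-issues the same kubectl command until it succeeds. A sketch of that retry loop, with the ~500ms interval inferred from the log timestamps:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Poll until the default ServiceAccount exists, as the log does above.
    func main() {
        for {
            err := exec.Command("sudo",
                "/var/lib/minikube/binaries/v1.27.2/kubectl",
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond) // NotFound until controller-manager catches up
        }
    }
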
	I0605 17:55:01.159727  471785 settings.go:142] acquiring lock: {Name:mk7ddedb44759cc39266e9c612309013659bd7a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:55:01.159793  471785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:55:01.161103  471785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/kubeconfig: {Name:mkb77de9bf1ac5a664886fbfefd28a762472c016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:55:01.162202  471785 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:55:01.162813  471785 kapi.go:59] client config for multinode-292850: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.crt", KeyFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.key", CAFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13df7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0605 17:55:01.164733  471785 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0605 17:55:01.164758  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:01.164768  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:01.164782  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:01.165827  471785 config.go:182] Loaded profile config "multinode-292850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:55:01.165944  471785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0605 17:55:01.166248  471785 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0605 17:55:01.166404  471785 addons.go:66] Setting storage-provisioner=true in profile "multinode-292850"
	I0605 17:55:01.166432  471785 addons.go:228] Setting addon storage-provisioner=true in "multinode-292850"
	I0605 17:55:01.166494  471785 cert_rotation.go:137] Starting client certificate rotation controller
	I0605 17:55:01.166549  471785 host.go:66] Checking if "multinode-292850" exists ...
	I0605 17:55:01.166864  471785 addons.go:66] Setting default-storageclass=true in profile "multinode-292850"
	I0605 17:55:01.166888  471785 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-292850"
	I0605 17:55:01.167224  471785 cli_runner.go:164] Run: docker container inspect multinode-292850 --format={{.State.Status}}
	I0605 17:55:01.167228  471785 cli_runner.go:164] Run: docker container inspect multinode-292850 --format={{.State.Status}}
	I0605 17:55:01.208978  471785 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:55:01.209268  471785 kapi.go:59] client config for multinode-292850: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.crt", KeyFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.key", CAFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13df7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0605 17:55:01.209619  471785 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0605 17:55:01.209636  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:01.209646  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:01.209654  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:01.246039  471785 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0605 17:55:01.248580  471785 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 17:55:01.248608  471785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0605 17:55:01.248680  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:55:01.293945  471785 round_trippers.go:574] Response Status: 200 OK in 84 milliseconds
	I0605 17:55:01.293970  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:01.293980  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:01.293987  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:01.293994  471785 round_trippers.go:580]     Content-Length: 109
	I0605 17:55:01.294000  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:01 GMT
	I0605 17:55:01.294007  471785 round_trippers.go:580]     Audit-Id: 34268e06-c767-4dad-9f41-b587636cb053
	I0605 17:55:01.294017  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:01.294023  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:01.297013  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa Username:docker}
	I0605 17:55:01.297957  471785 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"354"},"items":[]}
	I0605 17:55:01.298329  471785 addons.go:228] Setting addon default-storageclass=true in "multinode-292850"
	I0605 17:55:01.298371  471785 host.go:66] Checking if "multinode-292850" exists ...
	I0605 17:55:01.298817  471785 cli_runner.go:164] Run: docker container inspect multinode-292850 --format={{.State.Status}}
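
The StorageClassList request above comes back with empty "items" before the addon installs the standard class. A hedged client-go sketch of the same list call (clientset construction omitted; this is illustrative, not minikube's code):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // needDefaultStorageClass reports whether no StorageClass exists yet,
    // matching the empty "items" in the response body above.
    func needDefaultStorageClass(ctx context.Context, client kubernetes.Interface) (bool, error) {
        scs, err := client.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        return len(scs.Items) == 0, nil
    }
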
	I0605 17:55:01.336261  471785 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0605 17:55:01.336289  471785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0605 17:55:01.336351  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:55:01.345011  471785 round_trippers.go:574] Response Status: 200 OK in 180 milliseconds
	I0605 17:55:01.345050  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:01.345059  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:01.345066  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:01.345073  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:01.345079  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:01.345089  471785 round_trippers.go:580]     Content-Length: 291
	I0605 17:55:01.345096  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:01 GMT
	I0605 17:55:01.345113  471785 round_trippers.go:580]     Audit-Id: 22555cc4-4439-4457-aa45-d912609d9101
	I0605 17:55:01.353000  471785 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"caff1eae-79ac-49ee-ac75-910d1f9235c3","resourceVersion":"354","creationTimestamp":"2023-06-05T17:54:47Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0605 17:55:01.353447  471785 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"caff1eae-79ac-49ee-ac75-910d1f9235c3","resourceVersion":"354","creationTimestamp":"2023-06-05T17:54:47Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0605 17:55:01.353558  471785 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0605 17:55:01.353567  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:01.353576  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:01.353583  471785 round_trippers.go:473]     Content-Type: application/json
	I0605 17:55:01.353590  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:01.371041  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa Username:docker}
	I0605 17:55:01.386574  471785 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0605 17:55:01.386595  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:01.386605  471785 round_trippers.go:580]     Content-Length: 291
	I0605 17:55:01.386612  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:01 GMT
	I0605 17:55:01.386619  471785 round_trippers.go:580]     Audit-Id: b186004a-7554-497b-a8c6-041e918132f9
	I0605 17:55:01.386625  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:01.386632  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:01.386639  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:01.386645  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:01.387702  471785 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"caff1eae-79ac-49ee-ac75-910d1f9235c3","resourceVersion":"355","creationTimestamp":"2023-06-05T17:54:47Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0605 17:55:01.427173  471785 command_runner.go:130] > apiVersion: v1
	I0605 17:55:01.427244  471785 command_runner.go:130] > data:
	I0605 17:55:01.427264  471785 command_runner.go:130] >   Corefile: |
	I0605 17:55:01.427288  471785 command_runner.go:130] >     .:53 {
	I0605 17:55:01.427328  471785 command_runner.go:130] >         errors
	I0605 17:55:01.427355  471785 command_runner.go:130] >         health {
	I0605 17:55:01.427377  471785 command_runner.go:130] >            lameduck 5s
	I0605 17:55:01.427411  471785 command_runner.go:130] >         }
	I0605 17:55:01.427435  471785 command_runner.go:130] >         ready
	I0605 17:55:01.427462  471785 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0605 17:55:01.427499  471785 command_runner.go:130] >            pods insecure
	I0605 17:55:01.427525  471785 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0605 17:55:01.427547  471785 command_runner.go:130] >            ttl 30
	I0605 17:55:01.427585  471785 command_runner.go:130] >         }
	I0605 17:55:01.427610  471785 command_runner.go:130] >         prometheus :9153
	I0605 17:55:01.427631  471785 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0605 17:55:01.427666  471785 command_runner.go:130] >            max_concurrent 1000
	I0605 17:55:01.427690  471785 command_runner.go:130] >         }
	I0605 17:55:01.427714  471785 command_runner.go:130] >         cache 30
	I0605 17:55:01.427749  471785 command_runner.go:130] >         loop
	I0605 17:55:01.427775  471785 command_runner.go:130] >         reload
	I0605 17:55:01.427797  471785 command_runner.go:130] >         loadbalance
	I0605 17:55:01.427835  471785 command_runner.go:130] >     }
	I0605 17:55:01.427858  471785 command_runner.go:130] > kind: ConfigMap
	I0605 17:55:01.427878  471785 command_runner.go:130] > metadata:
	I0605 17:55:01.427914  471785 command_runner.go:130] >   creationTimestamp: "2023-06-05T17:54:47Z"
	I0605 17:55:01.427989  471785 command_runner.go:130] >   name: coredns
	I0605 17:55:01.428009  471785 command_runner.go:130] >   namespace: kube-system
	I0605 17:55:01.428031  471785 command_runner.go:130] >   resourceVersion: "229"
	I0605 17:55:01.428066  471785 command_runner.go:130] >   uid: 2ebfbba3-ed30-4d3f-a40d-ec6ed4457477
	I0605 17:55:01.433766  471785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0605 17:55:01.627204  471785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0605 17:55:01.633653  471785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 17:55:01.888645  471785 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0605 17:55:01.888669  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:01.888689  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:01.888702  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:01.935479  471785 round_trippers.go:574] Response Status: 200 OK in 46 milliseconds
	I0605 17:55:01.935518  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:01.935534  471785 round_trippers.go:580]     Audit-Id: ac72673e-9527-462a-8a48-9f5c42fb7d2f
	I0605 17:55:01.935545  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:01.935551  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:01.935565  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:01.935577  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:01.935584  471785 round_trippers.go:580]     Content-Length: 291
	I0605 17:55:01.935601  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:01 GMT
	I0605 17:55:01.936031  471785 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"caff1eae-79ac-49ee-ac75-910d1f9235c3","resourceVersion":"365","creationTimestamp":"2023-06-05T17:54:47Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0605 17:55:01.936181  471785 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-292850" context rescaled to 1 replicas
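
The GET/PUT pair above edits the Scale subresource of the coredns Deployment rather than the Deployment itself: the autoscaling/v1 Scale body is fetched, spec.replicas is dropped from 2 to 1, and the object is PUT back. A client-go sketch of the same round-trip (clientset construction omitted):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS mirrors the Scale subresource round-trip in the log.
    func rescaleCoreDNS(ctx context.Context, client kubernetes.Interface) error {
        scale, err := client.AppsV1().Deployments("kube-system").
            GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1 // the response bodies above show 2 -> 1
        _, err = client.AppsV1().Deployments("kube-system").
            UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
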
	I0605 17:55:01.936224  471785 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0605 17:55:01.938840  471785 out.go:177] * Verifying Kubernetes components...
	I0605 17:55:01.940761  471785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:55:02.391012  471785 command_runner.go:130] > configmap/coredns replaced
	I0605 17:55:02.396860  471785 start.go:916] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
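
The "host record injected" line is the result of the sed pipeline run at 17:55:01.433766: it splices two stanzas into the Corefile dumped earlier, then replaces the ConfigMap. Reconstructed from the sed expressions, the edit inserts a `log` line ahead of `errors` and this hosts block ahead of the `forward` block:

        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }

With that block in place, pods can resolve host.minikube.internal to the host-side gateway address without leaving cluster DNS.
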
	I0605 17:55:02.396915  471785 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0605 17:55:02.445520  471785 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0605 17:55:02.453220  471785 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0605 17:55:02.470077  471785 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0605 17:55:02.480680  471785 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0605 17:55:02.491551  471785 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0605 17:55:02.508680  471785 command_runner.go:130] > pod/storage-provisioner created
	I0605 17:55:02.512674  471785 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0605 17:55:02.510858  471785 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:55:02.515221  471785 addons.go:499] enable addons completed in 1.348970898s: enabled=[default-storageclass storage-provisioner]
	I0605 17:55:02.515589  471785 kapi.go:59] client config for multinode-292850: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.crt", KeyFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.key", CAFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13df7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0605 17:55:02.515946  471785 node_ready.go:35] waiting up to 6m0s for node "multinode-292850" to be "Ready" ...
	I0605 17:55:02.516032  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:02.516043  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:02.516060  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:02.516071  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:02.524121  471785 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0605 17:55:02.524148  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:02.524159  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:02.524166  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:02.524173  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:02 GMT
	I0605 17:55:02.524182  471785 round_trippers.go:580]     Audit-Id: f10dd863-43b5-4c89-801c-5670c9753b7b
	I0605 17:55:02.524193  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:02.524200  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:02.525295  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
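
Everything below this point is a readiness poll: minikube re-GETs the Node object roughly twice a second and keeps going while status.conditions reports Ready as False (the node_ready.go lines), within the stated 6m0s budget. A client-go sketch of that wait (clientset construction and error handling trimmed):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the Node until its Ready condition is True,
    // approximating the 6m0s budget and ~500ms interval seen in the log.
    func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string) bool {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return true
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }
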
	I0605 17:55:03.026405  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:03.026429  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:03.026440  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:03.026447  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:03.029437  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:03.029459  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:03.029468  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:03.029475  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:03.029482  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:03 GMT
	I0605 17:55:03.029492  471785 round_trippers.go:580]     Audit-Id: 53bbdbd9-2be9-4f68-92c2-53c20d9a58b6
	I0605 17:55:03.029499  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:03.029506  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:03.029621  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:03.527232  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:03.527264  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:03.527276  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:03.527287  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:03.530151  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:03.530175  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:03.530184  471785 round_trippers.go:580]     Audit-Id: d4bbca6e-e2f3-4467-bc03-ac33e159d7ec
	I0605 17:55:03.530191  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:03.530198  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:03.530205  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:03.530212  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:03.530239  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:03 GMT
	I0605 17:55:03.530691  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:04.026247  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:04.026271  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:04.026286  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:04.026294  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:04.029092  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:04.029128  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:04.029138  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:04 GMT
	I0605 17:55:04.029145  471785 round_trippers.go:580]     Audit-Id: 59acda1f-8163-4d37-bd41-9678d22bed22
	I0605 17:55:04.029151  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:04.029158  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:04.029165  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:04.029172  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:04.029344  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:04.526847  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:04.526872  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:04.526883  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:04.526890  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:04.529450  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:04.529513  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:04.529535  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:04.529558  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:04.529594  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:04.529623  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:04 GMT
	I0605 17:55:04.529645  471785 round_trippers.go:580]     Audit-Id: 453dcda2-6ea8-401e-916c-07ff0c008012
	I0605 17:55:04.529667  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:04.529790  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:04.530215  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:05.027184  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:05.027207  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:05.027217  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:05.027225  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:05.029937  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:05.030045  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:05.030068  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:05 GMT
	I0605 17:55:05.030092  471785 round_trippers.go:580]     Audit-Id: b1339c92-26f7-418a-9475-5e389db14969
	I0605 17:55:05.030129  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:05.030157  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:05.030183  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:05.030200  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:05.030334  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:05.526685  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:05.526725  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:05.526739  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:05.526750  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:05.529651  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:05.529715  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:05.529724  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:05 GMT
	I0605 17:55:05.529731  471785 round_trippers.go:580]     Audit-Id: 4ae95e01-f2fe-413d-983f-0b0bea01bdb1
	I0605 17:55:05.529738  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:05.529745  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:05.529751  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:05.529759  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:05.529849  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:06.026695  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:06.026724  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:06.026735  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:06.026743  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:06.029769  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:06.029791  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:06.029801  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:06.029808  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:06.029815  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:06 GMT
	I0605 17:55:06.029822  471785 round_trippers.go:580]     Audit-Id: 2a3bae35-8293-4777-a368-4acb2deb5f12
	I0605 17:55:06.029830  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:06.029836  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:06.030721  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:06.526957  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:06.526981  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:06.526993  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:06.527021  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:06.529538  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:06.529565  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:06.529575  471785 round_trippers.go:580]     Audit-Id: a8f62d3b-56b6-4562-98a6-b934e4b78e93
	I0605 17:55:06.529582  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:06.529591  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:06.529598  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:06.529605  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:06.529617  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:06 GMT
	I0605 17:55:06.529814  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:07.026971  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:07.026998  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:07.027009  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:07.027017  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:07.029729  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:07.029753  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:07.029764  471785 round_trippers.go:580]     Audit-Id: 31980451-9604-4e0f-b19e-a3f4d345877c
	I0605 17:55:07.029771  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:07.029778  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:07.029784  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:07.029791  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:07.029798  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:07 GMT
	I0605 17:55:07.030020  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:07.030467  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:07.526602  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:07.526625  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:07.526634  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:07.526642  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:07.529406  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:07.529448  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:07.529459  471785 round_trippers.go:580]     Audit-Id: d42d72b5-58a9-434c-b440-d7f22cbc6054
	I0605 17:55:07.529466  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:07.529473  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:07.529480  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:07.529487  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:07.529494  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:07 GMT
	I0605 17:55:07.529859  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:08.027098  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:08.027126  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:08.027137  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:08.027145  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:08.031057  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:08.031078  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:08.031091  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:08.031098  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:08 GMT
	I0605 17:55:08.031105  471785 round_trippers.go:580]     Audit-Id: 8432fa90-f915-4684-9b45-92c020de24fa
	I0605 17:55:08.031111  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:08.031119  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:08.031126  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:08.033552  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:08.526233  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:08.526259  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:08.526270  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:08.526277  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:08.528941  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:08.528968  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:08.528978  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:08.528985  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:08.528993  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:08 GMT
	I0605 17:55:08.529000  471785 round_trippers.go:580]     Audit-Id: 3b484ba0-21f3-4c3f-9e2a-9f5585528122
	I0605 17:55:08.529006  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:08.529015  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:08.529126  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:09.026197  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:09.026223  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:09.026235  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:09.026242  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:09.029109  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:09.029139  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:09.029149  471785 round_trippers.go:580]     Audit-Id: 12172c1b-adcc-4709-9f8f-ce92a458b79e
	I0605 17:55:09.029157  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:09.029164  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:09.029170  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:09.029177  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:09.029184  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:09 GMT
	I0605 17:55:09.029317  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:09.526211  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:09.526234  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:09.526244  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:09.526253  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:09.528941  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:09.528966  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:09.528976  471785 round_trippers.go:580]     Audit-Id: 5f2eef4d-2791-4fe3-bb46-c5ca2f7d5108
	I0605 17:55:09.528983  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:09.528990  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:09.528997  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:09.529004  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:09.529011  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:09 GMT
	I0605 17:55:09.529223  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:09.529645  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:10.027215  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:10.027241  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:10.027254  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:10.027262  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:10.030528  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:10.030553  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:10.030563  471785 round_trippers.go:580]     Audit-Id: 5243e81d-0255-4059-9219-f2ccbb532ab5
	I0605 17:55:10.030570  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:10.030578  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:10.030585  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:10.030593  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:10.030600  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:10 GMT
	I0605 17:55:10.030739  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:10.526917  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:10.526942  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:10.526952  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:10.526960  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:10.529675  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:10.529703  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:10.529713  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:10.529720  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:10 GMT
	I0605 17:55:10.529727  471785 round_trippers.go:580]     Audit-Id: a08c0924-24c1-4cd8-a4e8-0fff6cf22357
	I0605 17:55:10.529734  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:10.529740  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:10.529749  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:10.529848  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:11.027174  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:11.027201  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:11.027212  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:11.027220  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:11.029903  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:11.029925  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:11.029934  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:11.029942  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:11 GMT
	I0605 17:55:11.029949  471785 round_trippers.go:580]     Audit-Id: 3fccfed4-6aa3-4117-815a-9abe63a1bf53
	I0605 17:55:11.029955  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:11.029962  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:11.029968  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:11.030158  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:11.526807  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:11.526837  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:11.526847  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:11.526855  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:11.529434  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:11.529459  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:11.529469  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:11.529476  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:11.529487  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:11.529495  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:11.529502  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:11 GMT
	I0605 17:55:11.529509  471785 round_trippers.go:580]     Audit-Id: 48f17231-641b-48fc-88b8-c193396f6b8e
	I0605 17:55:11.529873  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:11.530286  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:12.026419  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:12.026445  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:12.026457  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:12.026465  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:12.029361  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:12.029382  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:12.029391  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:12.029398  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:12.029417  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:12.029425  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:12.029431  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:12 GMT
	I0605 17:55:12.029438  471785 round_trippers.go:580]     Audit-Id: 1bbe1ec5-04bf-4cb2-ad0c-6352bb02162a
	I0605 17:55:12.029586  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:12.526835  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:12.526862  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:12.526873  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:12.526886  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:12.529620  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:12.529651  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:12.529661  471785 round_trippers.go:580]     Audit-Id: 84f671eb-0986-4d99-af71-753dd7b6b252
	I0605 17:55:12.529673  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:12.529690  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:12.529697  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:12.529708  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:12.529716  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:12 GMT
	I0605 17:55:12.529852  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:13.026909  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:13.026937  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:13.026948  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:13.026955  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:13.029890  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:13.029914  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:13.029923  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:13.029930  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:13.029937  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:13.029944  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:13 GMT
	I0605 17:55:13.029951  471785 round_trippers.go:580]     Audit-Id: 3a74dac5-6e07-4bb9-8a53-c2dfca04c39a
	I0605 17:55:13.029958  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:13.030109  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:13.526217  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:13.526243  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:13.526254  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:13.526262  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:13.528945  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:13.528973  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:13.528982  471785 round_trippers.go:580]     Audit-Id: 481209f1-8467-4f02-aed1-3201c0a230f6
	I0605 17:55:13.528989  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:13.528996  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:13.529003  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:13.529010  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:13.529017  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:13 GMT
	I0605 17:55:13.529110  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:14.026211  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:14.026235  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:14.026249  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:14.026257  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:14.028989  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:14.029026  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:14.029037  471785 round_trippers.go:580]     Audit-Id: c8470282-bc08-40e0-b426-eada54723ef5
	I0605 17:55:14.029044  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:14.029052  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:14.029059  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:14.029067  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:14.029080  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:14 GMT
	I0605 17:55:14.029238  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:14.029654  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:14.526341  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:14.526366  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:14.526377  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:14.526385  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:14.529099  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:14.529128  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:14.529137  471785 round_trippers.go:580]     Audit-Id: f4881119-e9fd-4963-8543-f0a40d41d86e
	I0605 17:55:14.529144  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:14.529151  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:14.529158  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:14.529164  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:14.529171  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:14 GMT
	I0605 17:55:14.529297  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:15.026483  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:15.026520  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:15.026534  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:15.026544  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:15.030416  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:15.030448  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:15.030460  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:15.030468  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:15.030476  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:15 GMT
	I0605 17:55:15.030483  471785 round_trippers.go:580]     Audit-Id: 4a8ac8d9-b1b1-42ed-a67c-6c7e64f349ba
	I0605 17:55:15.030490  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:15.030497  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:15.030635  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:15.527161  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:15.527184  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:15.527193  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:15.527202  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:15.531097  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:15.531127  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:15.531140  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:15 GMT
	I0605 17:55:15.531154  471785 round_trippers.go:580]     Audit-Id: 053f530c-8f20-4003-83ef-0fe8b932e428
	I0605 17:55:15.531161  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:15.531168  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:15.531176  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:15.531189  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:15.531306  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:16.027040  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:16.027062  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:16.027072  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:16.027080  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:16.029692  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:16.029722  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:16.029731  471785 round_trippers.go:580]     Audit-Id: a2be8587-5aa0-4ebf-bc6c-0d90b68daf6f
	I0605 17:55:16.029738  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:16.029744  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:16.029751  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:16.029758  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:16.029769  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:16 GMT
	I0605 17:55:16.029884  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:16.030311  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:16.527066  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:16.527097  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:16.527108  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:16.527116  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:16.529965  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:16.529985  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:16.529995  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:16 GMT
	I0605 17:55:16.530001  471785 round_trippers.go:580]     Audit-Id: 19234955-e4c0-46f5-9746-4c2e70a156a8
	I0605 17:55:16.530008  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:16.530015  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:16.530021  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:16.530028  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:16.530134  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:17.027162  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:17.027183  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:17.027193  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:17.027201  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:17.030005  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:17.030033  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:17.030044  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:17 GMT
	I0605 17:55:17.030060  471785 round_trippers.go:580]     Audit-Id: 1d9da347-29ea-47d3-9585-1d2292fdfb2c
	I0605 17:55:17.030075  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:17.030083  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:17.030092  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:17.030108  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:17.030439  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:17.526666  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:17.526690  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:17.526701  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:17.526709  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:17.529692  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:17.529714  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:17.529723  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:17 GMT
	I0605 17:55:17.529730  471785 round_trippers.go:580]     Audit-Id: cdcff4aa-b529-43cc-a955-f6dda4912c0d
	I0605 17:55:17.529737  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:17.529743  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:17.529750  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:17.529757  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:17.530086  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:18.027182  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:18.027207  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:18.027218  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:18.027226  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:18.030048  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:18.030073  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:18.030084  471785 round_trippers.go:580]     Audit-Id: 8b7e5c23-a21a-4619-b019-fb02b316cc8f
	I0605 17:55:18.030091  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:18.030098  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:18.030106  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:18.030113  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:18.030119  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:18 GMT
	I0605 17:55:18.030339  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:18.030772  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:18.526987  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:18.527016  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:18.527027  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:18.527034  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:18.529722  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:18.529747  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:18.529757  471785 round_trippers.go:580]     Audit-Id: 4bda1e84-acba-4ffa-9811-a9091103f649
	I0605 17:55:18.529765  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:18.529771  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:18.529778  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:18.529784  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:18.529791  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:18 GMT
	I0605 17:55:18.529916  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:19.027142  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:19.027167  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:19.027178  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:19.027187  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:19.030008  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:19.030038  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:19.030047  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:19 GMT
	I0605 17:55:19.030054  471785 round_trippers.go:580]     Audit-Id: 4cf2899c-8d58-4d14-a0f6-46e1a930f9ff
	I0605 17:55:19.030061  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:19.030067  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:19.030074  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:19.030081  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:19.030194  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:19.526303  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:19.526330  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:19.526341  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:19.526349  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:19.529130  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:19.529153  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:19.529163  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:19.529170  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:19.529178  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:19 GMT
	I0605 17:55:19.529185  471785 round_trippers.go:580]     Audit-Id: 20afeab2-3c64-4281-ac5b-8d482f90e2f3
	I0605 17:55:19.529192  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:19.529198  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:19.529289  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:20.026320  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:20.026350  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:20.026362  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:20.026369  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:20.029438  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:20.029471  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:20.029481  471785 round_trippers.go:580]     Audit-Id: 0e1f0daa-8bad-486c-80db-6c58de2327a5
	I0605 17:55:20.029489  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:20.029495  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:20.029502  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:20.029510  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:20.029521  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:20 GMT
	I0605 17:55:20.029655  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:20.526866  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:20.526890  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:20.526901  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:20.526908  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:20.529534  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:20.529562  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:20.529573  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:20 GMT
	I0605 17:55:20.529580  471785 round_trippers.go:580]     Audit-Id: 5b428c1c-70ce-48a1-ab5b-04bf47977dc5
	I0605 17:55:20.529587  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:20.529594  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:20.529601  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:20.529608  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:20.529734  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:20.530141  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:21.026142  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:21.026168  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:21.026180  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:21.026188  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:21.029115  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:21.029139  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:21.029149  471785 round_trippers.go:580]     Audit-Id: d3aac769-bc81-4320-a712-25cb2a2b8dbe
	I0605 17:55:21.029156  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:21.029162  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:21.029169  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:21.029176  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:21.029184  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:21 GMT
	I0605 17:55:21.029321  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:21.527144  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:21.527168  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:21.527178  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:21.527186  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:21.530255  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:21.530277  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:21.530286  471785 round_trippers.go:580]     Audit-Id: 940db9d7-284c-43ae-ae65-66ba72ca56aa
	I0605 17:55:21.530293  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:21.530300  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:21.530307  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:21.530314  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:21.530320  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:21 GMT
	I0605 17:55:21.530542  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:22.027214  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:22.027239  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:22.027250  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:22.027258  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:22.030009  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:22.030032  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:22.030041  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:22.030048  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:22.030055  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:22.030062  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:22.030069  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:22 GMT
	I0605 17:55:22.030076  471785 round_trippers.go:580]     Audit-Id: 5442d437-9208-4d09-9a02-edb0b72e57da
	I0605 17:55:22.030216  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:22.526317  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:22.526341  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:22.526352  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:22.526359  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:22.529045  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:22.529066  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:22.529075  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:22.529082  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:22.529088  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:22 GMT
	I0605 17:55:22.529103  471785 round_trippers.go:580]     Audit-Id: 08c6f393-504d-4cf4-b24b-ac1ddbedc38e
	I0605 17:55:22.529111  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:22.529118  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:22.529237  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:23.026288  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:23.026314  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:23.026325  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:23.026332  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:23.029095  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:23.029117  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:23.029126  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:23 GMT
	I0605 17:55:23.029134  471785 round_trippers.go:580]     Audit-Id: eec7cc7b-2a49-4f65-8133-5cc5e323973a
	I0605 17:55:23.029146  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:23.029159  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:23.029171  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:23.029181  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:23.029298  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:23.029749  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:23.526228  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:23.526252  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:23.526262  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:23.526270  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:23.528847  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:23.528873  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:23.528883  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:23 GMT
	I0605 17:55:23.528890  471785 round_trippers.go:580]     Audit-Id: 40426767-4f8e-42b5-b992-bed3c4d22c7f
	I0605 17:55:23.528896  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:23.528903  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:23.528910  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:23.528920  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:23.529028  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:24.027147  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:24.027170  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:24.027181  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:24.027189  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:24.029834  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:24.029856  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:24.029865  471785 round_trippers.go:580]     Audit-Id: 0ccddd84-af92-4365-9699-45e41880abb2
	I0605 17:55:24.029872  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:24.029879  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:24.029885  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:24.029892  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:24.029899  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:24 GMT
	I0605 17:55:24.030053  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:24.526780  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:24.526826  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:24.526847  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:24.526854  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:24.529410  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:24.529436  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:24.529446  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:24.529453  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:24 GMT
	I0605 17:55:24.529460  471785 round_trippers.go:580]     Audit-Id: 372a97c8-d6d5-4aaf-a483-4d2d4b4a534b
	I0605 17:55:24.529467  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:24.529474  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:24.529484  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:24.529777  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:25.026417  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:25.026443  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:25.026455  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:25.026463  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:25.029701  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:25.029727  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:25.029737  471785 round_trippers.go:580]     Audit-Id: 92447889-9057-47b5-8a28-6e49af536360
	I0605 17:55:25.029830  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:25.029838  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:25.029845  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:25.029852  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:25.029859  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:25 GMT
	I0605 17:55:25.030004  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:25.030444  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:25.527115  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:25.527141  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:25.527151  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:25.527158  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:25.529867  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:25.529929  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:25.529954  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:25.529962  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:25.529969  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:25.529979  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:25 GMT
	I0605 17:55:25.529986  471785 round_trippers.go:580]     Audit-Id: 40ea2de4-f369-4110-babd-b5f0978ce4c1
	I0605 17:55:25.529996  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:25.530270  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:26.026949  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:26.026976  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:26.026989  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:26.026996  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:26.029849  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:26.029870  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:26.029879  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:26.029885  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:26.029893  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:26.029899  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:26 GMT
	I0605 17:55:26.029906  471785 round_trippers.go:580]     Audit-Id: eb4b8823-3113-46e2-8b5d-740c60061cae
	I0605 17:55:26.029912  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:26.030063  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:26.527192  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:26.527226  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:26.527248  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:26.527256  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:26.529950  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:26.529976  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:26.529986  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:26 GMT
	I0605 17:55:26.529993  471785 round_trippers.go:580]     Audit-Id: 7e8d2624-a37e-4294-b69c-1050fe67cd45
	I0605 17:55:26.530000  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:26.530007  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:26.530013  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:26.530020  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:26.530162  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:27.026286  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:27.026314  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:27.026347  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:27.026355  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:27.028931  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:27.028951  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:27.028961  471785 round_trippers.go:580]     Audit-Id: 24eabccd-8db7-4b2e-a79c-ff500f08f9e7
	I0605 17:55:27.028968  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:27.028974  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:27.028981  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:27.028988  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:27.028994  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:27 GMT
	I0605 17:55:27.029178  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:27.526826  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:27.526850  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:27.526860  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:27.526868  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:27.529604  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:27.529628  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:27.529637  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:27 GMT
	I0605 17:55:27.529644  471785 round_trippers.go:580]     Audit-Id: a2235dea-a5ed-43f0-b225-429c82e9f172
	I0605 17:55:27.529650  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:27.529657  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:27.529664  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:27.529670  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:27.529782  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:27.530179  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:28.026919  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:28.026942  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:28.026953  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:28.026960  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:28.029832  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:28.029857  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:28.029871  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:28.029884  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:28 GMT
	I0605 17:55:28.029898  471785 round_trippers.go:580]     Audit-Id: 7e066fa1-d637-45e9-ab70-d12b28453bd5
	I0605 17:55:28.029909  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:28.029917  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:28.029924  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:28.030076  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:28.526221  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:28.526246  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:28.526257  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:28.526264  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:28.528971  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:28.529005  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:28.529015  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:28.529022  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:28.529029  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:28 GMT
	I0605 17:55:28.529040  471785 round_trippers.go:580]     Audit-Id: 11cb3c85-85de-44cb-bcdc-2bddc66e7b9b
	I0605 17:55:28.529051  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:28.529061  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:28.529345  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:29.026661  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:29.026686  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:29.026697  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:29.026704  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:29.029309  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:29.029331  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:29.029340  471785 round_trippers.go:580]     Audit-Id: 769401c1-4997-4e2c-a0b0-55b083fed00c
	I0605 17:55:29.029347  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:29.029354  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:29.029361  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:29.029367  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:29.029374  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:29 GMT
	I0605 17:55:29.029589  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:29.527239  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:29.527262  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:29.527272  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:29.527280  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:29.529922  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:29.529949  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:29.529959  471785 round_trippers.go:580]     Audit-Id: 19bd15e3-34fc-45c4-8d21-1f708e5df225
	I0605 17:55:29.529967  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:29.529974  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:29.529981  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:29.529988  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:29.529999  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:29 GMT
	I0605 17:55:29.530159  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:29.530571  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:30.027177  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:30.027202  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:30.027213  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:30.027220  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:30.030725  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:30.030771  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:30.030783  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:30.030791  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:30.030799  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:30.030806  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:30.030819  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:30 GMT
	I0605 17:55:30.030826  471785 round_trippers.go:580]     Audit-Id: bdb35714-82b3-4790-bb14-cf96b4d70cd5
	I0605 17:55:30.030987  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:30.527181  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:30.527213  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:30.527225  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:30.527242  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:30.529996  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:30.530027  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:30.530038  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:30.530045  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:30.530056  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:30 GMT
	I0605 17:55:30.530063  471785 round_trippers.go:580]     Audit-Id: 52ca44be-4924-4e3e-ae16-74d2af3ec357
	I0605 17:55:30.530072  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:30.530079  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:30.530308  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:31.026225  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:31.026247  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:31.026260  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:31.026268  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:31.028901  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:31.028924  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:31.028934  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:31.028941  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:31.028949  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:31.028956  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:31 GMT
	I0605 17:55:31.028962  471785 round_trippers.go:580]     Audit-Id: e856a4ea-9dfb-49c7-aa4a-7e65eae1d9b6
	I0605 17:55:31.028973  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:31.029106  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:31.526603  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:31.526630  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:31.526640  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:31.526647  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:31.529341  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:31.529365  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:31.529374  471785 round_trippers.go:580]     Audit-Id: 5f70814b-1b98-4847-a484-a4a638247935
	I0605 17:55:31.529381  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:31.529387  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:31.529394  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:31.529401  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:31.529408  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:31 GMT
	I0605 17:55:31.529499  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:32.026398  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:32.026424  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:32.026444  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:32.026451  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:32.029138  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:32.029158  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:32.029167  471785 round_trippers.go:580]     Audit-Id: dec9dd9e-3218-4bff-98f5-06b32536ded0
	I0605 17:55:32.029174  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:32.029181  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:32.029188  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:32.029196  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:32.029204  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:32 GMT
	I0605 17:55:32.029344  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:32.029772  471785 node_ready.go:58] node "multinode-292850" has status "Ready":"False"
	I0605 17:55:32.526227  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:32.526253  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:32.526264  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:32.526272  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:32.528873  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:32.528894  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:32.528903  471785 round_trippers.go:580]     Audit-Id: aa7a2f38-10a7-4663-aaa5-893e7ced324b
	I0605 17:55:32.528910  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:32.528917  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:32.528923  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:32.528930  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:32.528937  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:32 GMT
	I0605 17:55:32.529073  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"313","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0605 17:55:33.026427  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:33.026455  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:33.026466  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:33.026475  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:33.029292  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:33.029317  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:33.029326  471785 round_trippers.go:580]     Audit-Id: 34917bf0-8bda-4fcb-a83f-2d05549e20c1
	I0605 17:55:33.029333  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:33.029340  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:33.029346  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:33.029353  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:33.029361  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:33 GMT
	I0605 17:55:33.029482  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:55:33.029912  471785 node_ready.go:49] node "multinode-292850" has status "Ready":"True"
	I0605 17:55:33.029922  471785 node_ready.go:38] duration metric: took 30.513958253s waiting for node "multinode-292850" to be "Ready" ...
	I0605 17:55:33.029931  471785 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0605 17:55:33.030037  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0605 17:55:33.030042  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:33.030050  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:33.030057  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:33.034065  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:33.034091  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:33.034100  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:33.034108  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:33 GMT
	I0605 17:55:33.034114  471785 round_trippers.go:580]     Audit-Id: 63a8994d-90fc-4e33-b2c2-e9aa5594aa75
	I0605 17:55:33.034121  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:33.034128  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:33.034135  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:33.034514  471785 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-5d78c9869d-g9m8h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"de5aab07-b3ba-4a99-8384-9958e4f604b3","resourceVersion":"406","creationTimestamp":"2023-06-05T17:55:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a0725fd4-795f-4b40-80b6-04fae54f5939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0725fd4-795f-4b40-80b6-04fae54f5939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55533 chars]
	I0605 17:55:33.038826  471785 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-g9m8h" in "kube-system" namespace to be "Ready" ...
	I0605 17:55:33.038953  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-g9m8h
	I0605 17:55:33.038966  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:33.038977  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:33.038984  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:33.041903  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:33.041929  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:33.041939  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:33 GMT
	I0605 17:55:33.041946  471785 round_trippers.go:580]     Audit-Id: 819cd3e0-7f22-4887-bc67-96459a3e1b2b
	I0605 17:55:33.041954  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:33.042012  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:33.042019  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:33.042028  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:33.042549  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-g9m8h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"de5aab07-b3ba-4a99-8384-9958e4f604b3","resourceVersion":"406","creationTimestamp":"2023-06-05T17:55:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a0725fd4-795f-4b40-80b6-04fae54f5939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0725fd4-795f-4b40-80b6-04fae54f5939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0605 17:55:33.043150  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:33.043168  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:33.043178  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:33.043185  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:33.045854  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:33.045875  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:33.045884  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:33.045891  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:33.045897  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:33.045904  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:33.045912  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:33 GMT
	I0605 17:55:33.045919  471785 round_trippers.go:580]     Audit-Id: 1d63e97f-5a81-47d6-87d5-1d60cde45e03
	I0605 17:55:33.046040  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:55:33.547196  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-g9m8h
	I0605 17:55:33.547220  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:33.547230  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:33.547238  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:33.549934  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:33.549955  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:33.549964  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:33.549971  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:33.549978  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:33.549985  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:33.549992  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:33 GMT
	I0605 17:55:33.549999  471785 round_trippers.go:580]     Audit-Id: e1524c90-1640-472c-b6e1-e1ea0193685f
	I0605 17:55:33.550102  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-g9m8h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"de5aab07-b3ba-4a99-8384-9958e4f604b3","resourceVersion":"406","creationTimestamp":"2023-06-05T17:55:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a0725fd4-795f-4b40-80b6-04fae54f5939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0725fd4-795f-4b40-80b6-04fae54f5939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0605 17:55:33.550625  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:33.550633  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:33.550641  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:33.550650  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:33.553088  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:33.553155  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:33.553177  471785 round_trippers.go:580]     Audit-Id: 022935c1-d82a-4bfd-8f6a-31b911810a86
	I0605 17:55:33.553200  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:33.553233  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:33.553287  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:33.553300  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:33.553308  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:33 GMT
	I0605 17:55:33.553440  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:55:34.046981  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-g9m8h
	I0605 17:55:34.047009  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.047019  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.047027  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.049942  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.050012  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.050035  471785 round_trippers.go:580]     Audit-Id: 866db09b-c345-4ef5-8cbd-af06ad07d118
	I0605 17:55:34.050059  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.050096  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.050122  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.050140  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.050147  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.050288  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-g9m8h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"de5aab07-b3ba-4a99-8384-9958e4f604b3","resourceVersion":"406","creationTimestamp":"2023-06-05T17:55:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a0725fd4-795f-4b40-80b6-04fae54f5939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0725fd4-795f-4b40-80b6-04fae54f5939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0605 17:55:34.050846  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:34.050862  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.050871  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.050880  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.053417  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.053446  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.053456  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.053464  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.053471  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.053478  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.053485  471785 round_trippers.go:580]     Audit-Id: ae3065ff-7213-4f28-9317-feb625a42bd5
	I0605 17:55:34.053491  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.053754  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:55:34.547426  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-g9m8h
	I0605 17:55:34.547453  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.547464  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.547472  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.550406  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.550482  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.550507  471785 round_trippers.go:580]     Audit-Id: abb3f77b-09aa-4e64-8949-b03e621ee4b7
	I0605 17:55:34.550530  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.550576  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.550603  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.550645  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.550671  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.550808  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-g9m8h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"de5aab07-b3ba-4a99-8384-9958e4f604b3","resourceVersion":"420","creationTimestamp":"2023-06-05T17:55:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a0725fd4-795f-4b40-80b6-04fae54f5939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0725fd4-795f-4b40-80b6-04fae54f5939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0605 17:55:34.551391  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:34.551408  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.551417  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.551425  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.554121  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.554149  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.554159  471785 round_trippers.go:580]     Audit-Id: 9b92c262-81f6-41d9-8afc-a40359de761a
	I0605 17:55:34.554167  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.554173  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.554180  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.554187  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.554194  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.554464  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:55:34.554878  471785 pod_ready.go:92] pod "coredns-5d78c9869d-g9m8h" in "kube-system" namespace has status "Ready":"True"
	I0605 17:55:34.554898  471785 pod_ready.go:81] duration metric: took 1.516028891s waiting for pod "coredns-5d78c9869d-g9m8h" in "kube-system" namespace to be "Ready" ...
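(Editor's note: each pod_ready wait above fetches the Pod object and inspects its status.conditions, passing once the Ready condition is True. A hedged sketch of that check; the helper name isPodReady is mine, not minikube's.)

```go
// Sketch only: the condition test behind the pod_ready waits above.
package sketch

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether a Pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```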
	I0605 17:55:34.554909  471785 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:55:34.554972  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-292850
	I0605 17:55:34.554983  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.554992  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.554999  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.557672  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.557710  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.557721  471785 round_trippers.go:580]     Audit-Id: 64f13ff9-07ed-4570-af31-e9850c722125
	I0605 17:55:34.557729  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.557739  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.557746  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.557755  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.557762  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.557891  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-292850","namespace":"kube-system","uid":"9851a436-29a1-4ee7-b3b0-ab3afbdeb909","resourceVersion":"390","creationTimestamp":"2023-06-05T17:54:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"5f348b4a3dbb4e3d988ba05637c6c0d9","kubernetes.io/config.mirror":"5f348b4a3dbb4e3d988ba05637c6c0d9","kubernetes.io/config.seen":"2023-06-05T17:54:47.979829524Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0605 17:55:34.558349  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:34.558366  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.558375  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.558382  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.560837  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.560861  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.560870  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.560877  471785 round_trippers.go:580]     Audit-Id: d683258d-4103-4a4c-844f-61351d588488
	I0605 17:55:34.560884  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.560891  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.560902  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.560913  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.561042  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:55:34.561436  471785 pod_ready.go:92] pod "etcd-multinode-292850" in "kube-system" namespace has status "Ready":"True"
	I0605 17:55:34.561452  471785 pod_ready.go:81] duration metric: took 6.531173ms waiting for pod "etcd-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:55:34.561474  471785 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:55:34.561540  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-292850
	I0605 17:55:34.561549  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.561557  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.561564  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.564077  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.564101  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.564110  471785 round_trippers.go:580]     Audit-Id: 996e5804-c7d5-4d56-8f4e-ebc163f36e7e
	I0605 17:55:34.564117  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.564124  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.564131  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.564138  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.564144  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.564289  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-292850","namespace":"kube-system","uid":"93831e67-92d5-43b4-9c66-5bce71b7550b","resourceVersion":"391","creationTimestamp":"2023-06-05T17:54:48Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"7df81ebc702fe430d81216befbf43af3","kubernetes.io/config.mirror":"7df81ebc702fe430d81216befbf43af3","kubernetes.io/config.seen":"2023-06-05T17:54:47.979831288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0605 17:55:34.564822  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:34.564840  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.564848  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.564856  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.567338  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.567363  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.567373  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.567380  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.567387  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.567394  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.567403  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.567416  471785 round_trippers.go:580]     Audit-Id: 0f46f374-32c6-4ffe-bd17-0cca635e231c
	I0605 17:55:34.567547  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:55:34.567968  471785 pod_ready.go:92] pod "kube-apiserver-multinode-292850" in "kube-system" namespace has status "Ready":"True"
	I0605 17:55:34.567989  471785 pod_ready.go:81] duration metric: took 6.501233ms waiting for pod "kube-apiserver-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:55:34.568001  471785 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:55:34.568069  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-292850
	I0605 17:55:34.568080  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.568088  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.568095  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.570730  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.570753  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.570762  471785 round_trippers.go:580]     Audit-Id: 582efae4-43e4-4d69-9931-d8675b9bf793
	I0605 17:55:34.570769  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.570781  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.570791  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.570805  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.570812  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.570983  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-292850","namespace":"kube-system","uid":"6c0b10fd-fb34-4ae9-9dbe-c7548b0bd11a","resourceVersion":"392","creationTimestamp":"2023-06-05T17:54:48Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5f305b36979486f8420656cc79a6f159","kubernetes.io/config.mirror":"5f305b36979486f8420656cc79a6f159","kubernetes.io/config.seen":"2023-06-05T17:54:47.979832568Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0605 17:55:34.571515  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:34.571528  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.571537  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.571544  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.574016  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.574036  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.574045  471785 round_trippers.go:580]     Audit-Id: a24008f3-9b52-4562-a31e-ce821bbcb0fe
	I0605 17:55:34.574052  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.574059  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.574066  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.574072  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.574079  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.574226  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:55:34.574628  471785 pod_ready.go:92] pod "kube-controller-manager-multinode-292850" in "kube-system" namespace has status "Ready":"True"
	I0605 17:55:34.574638  471785 pod_ready.go:81] duration metric: took 6.630694ms waiting for pod "kube-controller-manager-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:55:34.574650  471785 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v8xlw" in "kube-system" namespace to be "Ready" ...
	I0605 17:55:34.574706  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v8xlw
	I0605 17:55:34.574710  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.574718  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.574741  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.577314  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.577337  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.577347  471785 round_trippers.go:580]     Audit-Id: a2a27c08-ac49-4735-8dd0-28b5f15a83c8
	I0605 17:55:34.577354  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.577362  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.577369  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.577378  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.577386  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.577618  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v8xlw","generateName":"kube-proxy-","namespace":"kube-system","uid":"b11f9e66-fb00-4b48-98cf-113fa1163e85","resourceVersion":"385","creationTimestamp":"2023-06-05T17:55:00Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8cfb31b2-4d2c-480e-9c1d-672453d426a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cfb31b2-4d2c-480e-9c1d-672453d426a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5508 chars]
	I0605 17:55:34.627416  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:34.627439  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.627450  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.627457  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.630151  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.630191  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.630261  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.630274  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.630281  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.630288  471785 round_trippers.go:580]     Audit-Id: 4fd28249-443a-4ce6-abda-8b8b31db086f
	I0605 17:55:34.630299  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.630305  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.630430  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:55:34.630845  471785 pod_ready.go:92] pod "kube-proxy-v8xlw" in "kube-system" namespace has status "Ready":"True"
	I0605 17:55:34.630864  471785 pod_ready.go:81] duration metric: took 56.208415ms waiting for pod "kube-proxy-v8xlw" in "kube-system" namespace to be "Ready" ...
	I0605 17:55:34.630875  471785 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:55:34.827289  471785 request.go:628] Waited for 196.34776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-292850
	I0605 17:55:34.827356  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-292850
	I0605 17:55:34.827389  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:34.827422  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:34.827436  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:34.830137  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:34.830206  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:34.830229  471785 round_trippers.go:580]     Audit-Id: 8a64ea6d-cb38-488b-a161-ca7da43a8692
	I0605 17:55:34.830252  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:34.830290  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:34.830305  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:34.830313  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:34.830320  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:34 GMT
	I0605 17:55:34.830487  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-292850","namespace":"kube-system","uid":"d3d6371e-e9b5-4e31-8395-5c78f8fd0b10","resourceVersion":"389","creationTimestamp":"2023-06-05T17:54:48Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a27bbca9df3e6fc5b378037826433d3a","kubernetes.io/config.mirror":"a27bbca9df3e6fc5b378037826433d3a","kubernetes.io/config.seen":"2023-06-05T17:54:47.979823764Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0605 17:55:35.027173  471785 request.go:628] Waited for 196.248011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:35.027290  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:55:35.027304  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:35.027327  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:35.027337  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:35.030064  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:35.030140  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:35.030163  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:35.030186  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:35.030226  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:35.030243  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:35.030258  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:35 GMT
	I0605 17:55:35.030266  471785 round_trippers.go:580]     Audit-Id: fec8a2c3-6b6b-434b-b796-9a552c207366
	I0605 17:55:35.030428  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:55:35.030870  471785 pod_ready.go:92] pod "kube-scheduler-multinode-292850" in "kube-system" namespace has status "Ready":"True"
	I0605 17:55:35.030888  471785 pod_ready.go:81] duration metric: took 400.005729ms waiting for pod "kube-scheduler-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:55:35.030919  471785 pod_ready.go:38] duration metric: took 2.00095924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
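(Editor's note: the "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter, which defaults to QPS 5 with burst 10; the delay is imposed before a request leaves the client and is unrelated to the server-side APF flow-schema headers in the responses. The limiter is configurable on rest.Config, as in this sketch; the values shown are illustrative.)

```go
// Sketch only: raising client-go's client-side rate limits, whose
// defaults (QPS 5, Burst 10) produce the throttling waits logged above.
package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // allowed steady-state requests per second
	cfg.Burst = 100 // short bursts permitted above QPS
	return kubernetes.NewForConfig(cfg)
}
```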
	I0605 17:55:35.030949  471785 api_server.go:52] waiting for apiserver process to appear ...
	I0605 17:55:35.031042  471785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 17:55:35.045156  471785 command_runner.go:130] > 1230
	I0605 17:55:35.045209  471785 api_server.go:72] duration metric: took 33.10894127s to wait for apiserver process to appear ...
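(Editor's note: the process wait above succeeds once `sudo pgrep -xnf kube-apiserver.*minikube.*` returns a PID, here 1230. A local-exec sketch of the same probe; minikube actually runs the command through its ssh_runner on the node, so the plain exec.Command here is a simplification.)

```go
// Sketch only: mimics the pgrep probe from the log on the local host.
package sketch

import (
	"os/exec"
	"strings"
)

// apiserverRunning reports whether a kube-apiserver process whose
// command line matches the minikube pattern exists.
func apiserverRunning() (bool, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits 1 when no process matches.
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return false, nil
		}
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}
```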
	I0605 17:55:35.045220  471785 api_server.go:88] waiting for apiserver healthz status ...
	I0605 17:55:35.045241  471785 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0605 17:55:35.055246  471785 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
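(Editor's note: the healthz check above is a plain GET against the apiserver's /healthz endpoint, passing when it returns 200 with body "ok". Through client-go it can be issued via the discovery REST client, as in this sketch; this is not necessarily minikube's exact call path.)

```go
// Sketch only: GET /healthz via client-go, expecting the "ok" body
// shown in the log above.
package sketch

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func checkHealthz(ctx context.Context, cs *kubernetes.Clientset) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected healthz body: %q", body)
	}
	return nil
}
```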
	I0605 17:55:35.055319  471785 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0605 17:55:35.055328  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:35.055337  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:35.055349  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:35.056636  471785 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0605 17:55:35.056660  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:35.056669  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:35.056677  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:35.056684  471785 round_trippers.go:580]     Content-Length: 263
	I0605 17:55:35.056691  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:35 GMT
	I0605 17:55:35.056698  471785 round_trippers.go:580]     Audit-Id: 6b2917d8-4b58-4c65-b61d-b02feaa98639
	I0605 17:55:35.056705  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:35.056715  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:35.056735  471785 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.2",
	  "gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
	  "gitTreeState": "clean",
	  "buildDate": "2023-05-17T14:13:28Z",
	  "goVersion": "go1.20.4",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0605 17:55:35.056825  471785 api_server.go:141] control plane version: v1.27.2
	I0605 17:55:35.056855  471785 api_server.go:131] duration metric: took 11.62459ms to wait for apiserver health ...
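(Editor's note: the /version request above maps onto client-go's discovery client; ServerVersion() returns the same fields the JSON body shows, e.g. gitVersion v1.27.2 on linux/arm64. A sketch:)

```go
// Sketch only: fetch GET /version through the discovery client.
package sketch

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func printControlPlaneVersion(cs *kubernetes.Clientset) error {
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("control plane version: %s (%s)\n", v.GitVersion, v.Platform)
	return nil
}
```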
	I0605 17:55:35.056863  471785 system_pods.go:43] waiting for kube-system pods to appear ...
	I0605 17:55:35.227228  471785 request.go:628] Waited for 170.301499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0605 17:55:35.227300  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0605 17:55:35.227311  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:35.227321  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:35.227328  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:35.231158  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:35.231234  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:35.231257  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:35.231284  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:35 GMT
	I0605 17:55:35.231321  471785 round_trippers.go:580]     Audit-Id: 9492404c-fddf-44d0-a64a-098d1015304e
	I0605 17:55:35.231337  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:35.231345  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:35.231352  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:35.231826  471785 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-5d78c9869d-g9m8h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"de5aab07-b3ba-4a99-8384-9958e4f604b3","resourceVersion":"420","creationTimestamp":"2023-06-05T17:55:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a0725fd4-795f-4b40-80b6-04fae54f5939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0725fd4-795f-4b40-80b6-04fae54f5939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I0605 17:55:35.234261  471785 system_pods.go:59] 8 kube-system pods found
	I0605 17:55:35.234296  471785 system_pods.go:61] "coredns-5d78c9869d-g9m8h" [de5aab07-b3ba-4a99-8384-9958e4f604b3] Running
	I0605 17:55:35.234302  471785 system_pods.go:61] "etcd-multinode-292850" [9851a436-29a1-4ee7-b3b0-ab3afbdeb909] Running
	I0605 17:55:35.234307  471785 system_pods.go:61] "kindnet-wm5x2" [4cc771cc-ca35-492b-baa7-37f03a3cc7c0] Running
	I0605 17:55:35.234312  471785 system_pods.go:61] "kube-apiserver-multinode-292850" [93831e67-92d5-43b4-9c66-5bce71b7550b] Running
	I0605 17:55:35.234318  471785 system_pods.go:61] "kube-controller-manager-multinode-292850" [6c0b10fd-fb34-4ae9-9dbe-c7548b0bd11a] Running
	I0605 17:55:35.234323  471785 system_pods.go:61] "kube-proxy-v8xlw" [b11f9e66-fb00-4b48-98cf-113fa1163e85] Running
	I0605 17:55:35.234329  471785 system_pods.go:61] "kube-scheduler-multinode-292850" [d3d6371e-e9b5-4e31-8395-5c78f8fd0b10] Running
	I0605 17:55:35.234339  471785 system_pods.go:61] "storage-provisioner" [4675df19-daf8-44d2-992e-6f6be51be7da] Running
	I0605 17:55:35.234344  471785 system_pods.go:74] duration metric: took 177.477045ms to wait for pod list to return data ...
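
The "Waited for ... due to client-side throttling" messages above come from client-go's token-bucket rate limiter, not from server-side API Priority and Fairness. A minimal sketch of that limiter, assuming the k8s.io/client-go/util/flowcontrol package and the QPS=5/Burst=10 defaults that rest.Config applies when left unset:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // rest.Config defaults to QPS=5, Burst=10; requests beyond the burst
        // block in Accept(), which is what produces the "Waited for ..." lines.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5.0, 10)
        for i := 0; i < 12; i++ {
            start := time.Now()
            limiter.Accept() // blocks until a token is available
            fmt.Printf("request %d waited %v\n", i, time.Since(start))
        }
    }
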
	I0605 17:55:35.234360  471785 default_sa.go:34] waiting for default service account to be created ...
	I0605 17:55:35.426766  471785 request.go:628] Waited for 192.333211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0605 17:55:35.426836  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0605 17:55:35.426848  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:35.426857  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:35.426876  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:35.429559  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:35.429591  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:35.429600  471785 round_trippers.go:580]     Audit-Id: 76747e9b-f6d1-4c3f-9a1d-06d189eb5c6f
	I0605 17:55:35.429608  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:35.429614  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:35.429621  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:35.429628  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:35.429635  471785 round_trippers.go:580]     Content-Length: 261
	I0605 17:55:35.429641  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:35 GMT
	I0605 17:55:35.429671  471785 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"00dad799-b714-4e4b-87b9-d3e981dddd5b","resourceVersion":"297","creationTimestamp":"2023-06-05T17:55:00Z"}}]}
	I0605 17:55:35.429898  471785 default_sa.go:45] found service account: "default"
	I0605 17:55:35.429916  471785 default_sa.go:55] duration metric: took 195.549936ms for default service account to be created ...
	I0605 17:55:35.429926  471785 system_pods.go:116] waiting for k8s-apps to be running ...
	I0605 17:55:35.627318  471785 request.go:628] Waited for 197.328937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0605 17:55:35.627412  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0605 17:55:35.627425  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:35.627435  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:35.627443  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:35.631365  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:35.631391  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:35.631400  471785 round_trippers.go:580]     Audit-Id: 1a97ee17-57b8-4d08-9905-2d391d24ea55
	I0605 17:55:35.631408  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:35.631414  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:35.631421  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:35.631428  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:35.631435  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:35 GMT
	I0605 17:55:35.631843  471785 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-5d78c9869d-g9m8h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"de5aab07-b3ba-4a99-8384-9958e4f604b3","resourceVersion":"420","creationTimestamp":"2023-06-05T17:55:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a0725fd4-795f-4b40-80b6-04fae54f5939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0725fd4-795f-4b40-80b6-04fae54f5939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I0605 17:55:35.634803  471785 system_pods.go:86] 8 kube-system pods found
	I0605 17:55:35.634834  471785 system_pods.go:89] "coredns-5d78c9869d-g9m8h" [de5aab07-b3ba-4a99-8384-9958e4f604b3] Running
	I0605 17:55:35.634843  471785 system_pods.go:89] "etcd-multinode-292850" [9851a436-29a1-4ee7-b3b0-ab3afbdeb909] Running
	I0605 17:55:35.634849  471785 system_pods.go:89] "kindnet-wm5x2" [4cc771cc-ca35-492b-baa7-37f03a3cc7c0] Running
	I0605 17:55:35.634864  471785 system_pods.go:89] "kube-apiserver-multinode-292850" [93831e67-92d5-43b4-9c66-5bce71b7550b] Running
	I0605 17:55:35.634873  471785 system_pods.go:89] "kube-controller-manager-multinode-292850" [6c0b10fd-fb34-4ae9-9dbe-c7548b0bd11a] Running
	I0605 17:55:35.634879  471785 system_pods.go:89] "kube-proxy-v8xlw" [b11f9e66-fb00-4b48-98cf-113fa1163e85] Running
	I0605 17:55:35.634891  471785 system_pods.go:89] "kube-scheduler-multinode-292850" [d3d6371e-e9b5-4e31-8395-5c78f8fd0b10] Running
	I0605 17:55:35.634897  471785 system_pods.go:89] "storage-provisioner" [4675df19-daf8-44d2-992e-6f6be51be7da] Running
	I0605 17:55:35.634910  471785 system_pods.go:126] duration metric: took 204.97649ms to wait for k8s-apps to be running ...
	I0605 17:55:35.634919  471785 system_svc.go:44] waiting for kubelet service to be running ....
	I0605 17:55:35.634982  471785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:55:35.652647  471785 system_svc.go:56] duration metric: took 17.718176ms WaitForService to wait for kubelet.
	I0605 17:55:35.652674  471785 kubeadm.go:581] duration metric: took 33.716410468s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
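
The kubelet probe above leans entirely on systemd's exit status: "is-active --quiet" prints nothing and exits 0 only when the unit is active. A local sketch of the same check (minikube runs the command through its SSH runner with sudo, so this exec.Command form is an illustrative assumption):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // isActive mirrors "systemctl is-active --quiet kubelet": a nil error
    // means exit status 0, i.e. the unit is active.
    func isActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", isActive("kubelet"))
    }
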
	I0605 17:55:35.652698  471785 node_conditions.go:102] verifying NodePressure condition ...
	I0605 17:55:35.827046  471785 request.go:628] Waited for 174.263937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0605 17:55:35.827121  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0605 17:55:35.827132  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:35.827142  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:35.827149  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:35.829649  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:35.829677  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:35.829687  471785 round_trippers.go:580]     Audit-Id: d85b9a87-3f32-4fab-82ad-ff7828e9af94
	I0605 17:55:35.829703  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:35.829711  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:35.829718  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:35.829731  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:35.829740  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:35 GMT
	I0605 17:55:35.830243  471785 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0605 17:55:35.830734  471785 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0605 17:55:35.830759  471785 node_conditions.go:123] node cpu capacity is 2
	I0605 17:55:35.830772  471785 node_conditions.go:105] duration metric: took 178.06874ms to run NodePressure ...
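
The NodePressure step reads capacity straight off the NodeList fetched above (203034800Ki of ephemeral storage, 2 CPUs). A sketch of the same read with client-go; the kubeconfig path here is a hypothetical stand-in for the profile's real one:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical path; minikube writes a kubeconfig per profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
                n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
    }
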
	I0605 17:55:35.830784  471785 start.go:228] waiting for startup goroutines ...
	I0605 17:55:35.830798  471785 start.go:233] waiting for cluster config update ...
	I0605 17:55:35.830808  471785 start.go:242] writing updated cluster config ...
	I0605 17:55:35.833223  471785 out.go:177] 
	I0605 17:55:35.834981  471785 config.go:182] Loaded profile config "multinode-292850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:55:35.835083  471785 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/config.json ...
	I0605 17:55:35.837199  471785 out.go:177] * Starting worker node multinode-292850-m02 in cluster multinode-292850
	I0605 17:55:35.838958  471785 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 17:55:35.840657  471785 out.go:177] * Pulling base image ...
	I0605 17:55:35.842404  471785 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:55:35.842437  471785 cache.go:57] Caching tarball of preloaded images
	I0605 17:55:35.842458  471785 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon
	I0605 17:55:35.842720  471785 preload.go:174] Found /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0605 17:55:35.842766  471785 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0605 17:55:35.842940  471785 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/config.json ...
	I0605 17:55:35.860610  471785 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon, skipping pull
	I0605 17:55:35.860638  471785 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f exists in daemon, skipping load
	I0605 17:55:35.860662  471785 cache.go:195] Successfully downloaded all kic artifacts
	I0605 17:55:35.860691  471785 start.go:364] acquiring machines lock for multinode-292850-m02: {Name:mk1de5b0a3fe16acd308af7b6be29c04aca290be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 17:55:35.860822  471785 start.go:368] acquired machines lock for "multinode-292850-m02" in 107.905µs
	I0605 17:55:35.860853  471785 start.go:93] Provisioning new machine with config: &{Name:multinode-292850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-292850 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0605 17:55:35.860948  471785 start.go:125] createHost starting for "m02" (driver="docker")
	I0605 17:55:35.863107  471785 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0605 17:55:35.863230  471785 start.go:159] libmachine.API.Create for "multinode-292850" (driver="docker")
	I0605 17:55:35.863252  471785 client.go:168] LocalClient.Create starting
	I0605 17:55:35.863328  471785 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem
	I0605 17:55:35.863366  471785 main.go:141] libmachine: Decoding PEM data...
	I0605 17:55:35.863387  471785 main.go:141] libmachine: Parsing certificate...
	I0605 17:55:35.863445  471785 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem
	I0605 17:55:35.863466  471785 main.go:141] libmachine: Decoding PEM data...
	I0605 17:55:35.863480  471785 main.go:141] libmachine: Parsing certificate...
	I0605 17:55:35.863741  471785 cli_runner.go:164] Run: docker network inspect multinode-292850 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 17:55:35.884019  471785 network_create.go:76] Found existing network {name:multinode-292850 subnet:0x40017636b0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0605 17:55:35.884067  471785 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-292850-m02" container
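
The "calculated static IP" is plain host-number arithmetic inside the cluster subnet: the gateway takes .1, the control plane .2, and each added node the next address (.3 for m02). A sketch of that calculation; the function name and index convention are illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "net"
    )

    // nextIP returns the host address assigned to the nodeIndex-th machine
    // in the subnet, counting the control plane as index 1. No overflow
    // handling; this is a sketch for /24-sized networks.
    func nextIP(subnet string, nodeIndex int) (net.IP, error) {
        ip, _, err := net.ParseCIDR(subnet)
        if err != nil {
            return nil, err
        }
        ip = ip.To4()
        ip[3] += byte(1 + nodeIndex) // .2 for node 1, .3 for node 2, ...
        return ip, nil
    }

    func main() {
        ip, _ := nextIP("192.168.58.0/24", 2)
        fmt.Println(ip) // 192.168.58.3
    }
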
	I0605 17:55:35.884141  471785 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0605 17:55:35.902010  471785 cli_runner.go:164] Run: docker volume create multinode-292850-m02 --label name.minikube.sigs.k8s.io=multinode-292850-m02 --label created_by.minikube.sigs.k8s.io=true
	I0605 17:55:35.920782  471785 oci.go:103] Successfully created a docker volume multinode-292850-m02
	I0605 17:55:35.920883  471785 cli_runner.go:164] Run: docker run --rm --name multinode-292850-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-292850-m02 --entrypoint /usr/bin/test -v multinode-292850-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -d /var/lib
	I0605 17:55:36.508363  471785 oci.go:107] Successfully prepared a docker volume multinode-292850-m02
	I0605 17:55:36.508392  471785 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:55:36.508413  471785 kic.go:190] Starting extracting preloaded images to volume ...
	I0605 17:55:36.508501  471785 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-292850-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -I lz4 -xf /preloaded.tar -C /extractDir
	I0605 17:55:40.642991  471785 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-292850-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f -I lz4 -xf /preloaded.tar -C /extractDir: (4.134453106s)
	I0605 17:55:40.643024  471785 kic.go:199] duration metric: took 4.134607 seconds to extract preloaded images to volume
	W0605 17:55:40.643157  471785 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0605 17:55:40.643297  471785 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0605 17:55:40.704823  471785 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-292850-m02 --name multinode-292850-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-292850-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-292850-m02 --network multinode-292850 --ip 192.168.58.3 --volume multinode-292850-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f
	I0605 17:55:41.055652  471785 cli_runner.go:164] Run: docker container inspect multinode-292850-m02 --format={{.State.Running}}
	I0605 17:55:41.087996  471785 cli_runner.go:164] Run: docker container inspect multinode-292850-m02 --format={{.State.Status}}
	I0605 17:55:41.122792  471785 cli_runner.go:164] Run: docker exec multinode-292850-m02 stat /var/lib/dpkg/alternatives/iptables
	I0605 17:55:41.197113  471785 oci.go:144] the created container "multinode-292850-m02" has a running status.
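
The docker run above publishes 22/tcp to 127.0.0.1 without a fixed host port, so the SSH endpoint has to be resolved afterwards. The inspect template in this lookup is the one the log itself uses a few lines below, where it resolves to port 33193; the Go wrapper around it is a sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostSSHPort asks Docker which random host port it bound to the
    // container's 22/tcp.
    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        port, err := hostSSHPort("multinode-292850-m02")
        fmt.Println(port, err)
    }
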
	I0605 17:55:41.197140  471785 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850-m02/id_rsa...
	I0605 17:55:41.789282  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0605 17:55:41.789374  471785 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0605 17:55:41.817096  471785 cli_runner.go:164] Run: docker container inspect multinode-292850-m02 --format={{.State.Status}}
	I0605 17:55:41.844491  471785 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0605 17:55:41.844511  471785 kic_runner.go:114] Args: [docker exec --privileged multinode-292850-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0605 17:55:41.974971  471785 cli_runner.go:164] Run: docker container inspect multinode-292850-m02 --format={{.State.Status}}
	I0605 17:55:42.011713  471785 machine.go:88] provisioning docker machine ...
	I0605 17:55:42.011751  471785 ubuntu.go:169] provisioning hostname "multinode-292850-m02"
	I0605 17:55:42.011824  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850-m02
	I0605 17:55:42.049577  471785 main.go:141] libmachine: Using SSH client type: native
	I0605 17:55:42.050051  471785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0605 17:55:42.050072  471785 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-292850-m02 && echo "multinode-292850-m02" | sudo tee /etc/hostname
	I0605 17:55:42.250038  471785 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-292850-m02
	
	I0605 17:55:42.250203  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850-m02
	I0605 17:55:42.275543  471785 main.go:141] libmachine: Using SSH client type: native
	I0605 17:55:42.276014  471785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0605 17:55:42.276037  471785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-292850-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-292850-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-292850-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0605 17:55:42.429781  471785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0605 17:55:42.429804  471785 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16634-402421/.minikube CaCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16634-402421/.minikube}
	I0605 17:55:42.429821  471785 ubuntu.go:177] setting up certificates
	I0605 17:55:42.429829  471785 provision.go:83] configureAuth start
	I0605 17:55:42.429887  471785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-292850-m02
	I0605 17:55:42.459195  471785 provision.go:138] copyHostCerts
	I0605 17:55:42.459234  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem
	I0605 17:55:42.459267  471785 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem, removing ...
	I0605 17:55:42.459273  471785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem
	I0605 17:55:42.459363  471785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem (1082 bytes)
	I0605 17:55:42.459443  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem
	I0605 17:55:42.459460  471785 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem, removing ...
	I0605 17:55:42.459466  471785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem
	I0605 17:55:42.459494  471785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem (1123 bytes)
	I0605 17:55:42.459534  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem
	I0605 17:55:42.459558  471785 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem, removing ...
	I0605 17:55:42.459563  471785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem
	I0605 17:55:42.459588  471785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem (1675 bytes)
	I0605 17:55:42.459630  471785 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem org=jenkins.multinode-292850-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-292850-m02]
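
server.pem is an RSA certificate whose subject alternative names are exactly the san=[...] list above, signed by the minikube CA. A self-signed approximation with crypto/x509 (self-signing, the serial number, and the key usages are assumptions; only the SANs, the org, and the 26280h lifetime from the config dump come from the log):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-292850-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-292850-m02"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
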
	I0605 17:55:43.023101  471785 provision.go:172] copyRemoteCerts
	I0605 17:55:43.023176  471785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0605 17:55:43.023253  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850-m02
	I0605 17:55:43.042214  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850-m02/id_rsa Username:docker}
	I0605 17:55:43.147393  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0605 17:55:43.147497  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0605 17:55:43.177823  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0605 17:55:43.177887  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0605 17:55:43.210950  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0605 17:55:43.211023  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0605 17:55:43.249874  471785 provision.go:86] duration metric: configureAuth took 820.03032ms
	I0605 17:55:43.249904  471785 ubuntu.go:193] setting minikube options for container-runtime
	I0605 17:55:43.250116  471785 config.go:182] Loaded profile config "multinode-292850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:55:43.250240  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850-m02
	I0605 17:55:43.269430  471785 main.go:141] libmachine: Using SSH client type: native
	I0605 17:55:43.269870  471785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0605 17:55:43.269891  471785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0605 17:55:43.535522  471785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0605 17:55:43.535548  471785 machine.go:91] provisioned docker machine in 1.523814008s
	I0605 17:55:43.535558  471785 client.go:171] LocalClient.Create took 7.672301206s
	I0605 17:55:43.535573  471785 start.go:167] duration metric: libmachine.API.Create for "multinode-292850" took 7.672343314s
	I0605 17:55:43.535580  471785 start.go:300] post-start starting for "multinode-292850-m02" (driver="docker")
	I0605 17:55:43.535586  471785 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0605 17:55:43.535654  471785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0605 17:55:43.535701  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850-m02
	I0605 17:55:43.557611  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850-m02/id_rsa Username:docker}
	I0605 17:55:43.659755  471785 ssh_runner.go:195] Run: cat /etc/os-release
	I0605 17:55:43.664201  471785 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0605 17:55:43.664222  471785 command_runner.go:130] > NAME="Ubuntu"
	I0605 17:55:43.664230  471785 command_runner.go:130] > VERSION_ID="22.04"
	I0605 17:55:43.664236  471785 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0605 17:55:43.664242  471785 command_runner.go:130] > VERSION_CODENAME=jammy
	I0605 17:55:43.664246  471785 command_runner.go:130] > ID=ubuntu
	I0605 17:55:43.664251  471785 command_runner.go:130] > ID_LIKE=debian
	I0605 17:55:43.664257  471785 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0605 17:55:43.664262  471785 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0605 17:55:43.664270  471785 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0605 17:55:43.664281  471785 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0605 17:55:43.664290  471785 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0605 17:55:43.664381  471785 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0605 17:55:43.664422  471785 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0605 17:55:43.664439  471785 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0605 17:55:43.664446  471785 info.go:137] Remote host: Ubuntu 22.04.2 LTS
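
/etc/os-release is a flat KEY=value file, and the "Couldn't set key ..." warnings just mean those keys have no matching field in libmachine's struct. A minimal parser for the format (a sketch, not libmachine's implementation; requires Go 1.18+ for strings.Cut):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease reads KEY=value lines, dropping blanks, comments,
    // and surrounding double quotes on values.
    func parseOSRelease(data string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(data))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out
    }

    func main() {
        m := parseOSRelease("PRETTY_NAME=\"Ubuntu 22.04.2 LTS\"\nID=ubuntu\n")
        fmt.Println(m["PRETTY_NAME"], m["ID"]) // Ubuntu 22.04.2 LTS ubuntu
    }
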
	I0605 17:55:43.664459  471785 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/addons for local assets ...
	I0605 17:55:43.664524  471785 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/files for local assets ...
	I0605 17:55:43.664613  471785 filesync.go:149] local asset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> 4078132.pem in /etc/ssl/certs
	I0605 17:55:43.664625  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> /etc/ssl/certs/4078132.pem
	I0605 17:55:43.664735  471785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0605 17:55:43.675853  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 17:55:43.706812  471785 start.go:303] post-start completed in 171.216844ms
	I0605 17:55:43.707268  471785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-292850-m02
	I0605 17:55:43.725529  471785 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/config.json ...
	I0605 17:55:43.725816  471785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 17:55:43.725870  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850-m02
	I0605 17:55:43.746414  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850-m02/id_rsa Username:docker}
	I0605 17:55:43.843256  471785 command_runner.go:130] > 16%!(MISSING)
	I0605 17:55:43.843907  471785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0605 17:55:43.849493  471785 command_runner.go:130] > 164G
	I0605 17:55:43.849986  471785 start.go:128] duration metric: createHost completed in 7.989025912s
	I0605 17:55:43.850032  471785 start.go:83] releasing machines lock for "multinode-292850-m02", held for 7.989196825s
	I0605 17:55:43.850138  471785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-292850-m02
	I0605 17:55:43.876729  471785 out.go:177] * Found network options:
	I0605 17:55:43.879040  471785 out.go:177]   - NO_PROXY=192.168.58.2
	W0605 17:55:43.881328  471785 proxy.go:119] fail to check proxy env: Error ip not in block
	W0605 17:55:43.881378  471785 proxy.go:119] fail to check proxy env: Error ip not in block
	I0605 17:55:43.881456  471785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0605 17:55:43.881504  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850-m02
	I0605 17:55:43.881778  471785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0605 17:55:43.881845  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850-m02
	I0605 17:55:43.912091  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850-m02/id_rsa Username:docker}
	I0605 17:55:43.913633  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850-m02/id_rsa Username:docker}
	I0605 17:55:44.188648  471785 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0605 17:55:44.192226  471785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0605 17:55:44.197993  471785 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0605 17:55:44.198019  471785 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0605 17:55:44.198027  471785 command_runner.go:130] > Device: b3h/179d	Inode: 3638838     Links: 1
	I0605 17:55:44.198043  471785 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0605 17:55:44.198071  471785 command_runner.go:130] > Access: 2023-04-04 14:31:21.000000000 +0000
	I0605 17:55:44.198082  471785 command_runner.go:130] > Modify: 2023-04-04 14:31:21.000000000 +0000
	I0605 17:55:44.198088  471785 command_runner.go:130] > Change: 2023-06-05 17:31:00.544911935 +0000
	I0605 17:55:44.198106  471785 command_runner.go:130] >  Birth: 2023-06-05 17:31:00.544911935 +0000
	I0605 17:55:44.198438  471785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 17:55:44.227911  471785 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0605 17:55:44.228126  471785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 17:55:44.274658  471785 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0605 17:55:44.274687  471785 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
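
Disabling the pre-baked bridge and podman CNI configs is a rename pass: every match gets a .mk_disabled suffix so that kindnet is the only CNI left for CRI-O to load. A local Go sketch of the same pass (minikube does it with find/mv over SSH, so this version is illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfigs moves aside every config matching the patterns,
    // skipping files that are already disabled.
    func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
        var moved []string
        for _, p := range patterns {
            matches, err := filepath.Glob(filepath.Join(dir, p))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                moved = append(moved, m)
            }
        }
        return moved, nil
    }

    func main() {
        moved, err := disableCNIConfigs("/etc/cni/net.d", []string{"*bridge*", "*podman*"})
        fmt.Println(moved, err)
    }
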
	I0605 17:55:44.274695  471785 start.go:481] detecting cgroup driver to use...
	I0605 17:55:44.274746  471785 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0605 17:55:44.274815  471785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0605 17:55:44.297708  471785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0605 17:55:44.313078  471785 docker.go:193] disabling cri-docker service (if available) ...
	I0605 17:55:44.313149  471785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0605 17:55:44.333072  471785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0605 17:55:44.352013  471785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0605 17:55:44.480941  471785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0605 17:55:44.596558  471785 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0605 17:55:44.596592  471785 docker.go:209] disabling docker service ...
	I0605 17:55:44.596646  471785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0605 17:55:44.624500  471785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0605 17:55:44.639120  471785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0605 17:55:44.748673  471785 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0605 17:55:44.748808  471785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0605 17:55:44.765556  471785 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0605 17:55:44.862609  471785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0605 17:55:44.879146  471785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 17:55:44.899257  471785 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0605 17:55:44.900719  471785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0605 17:55:44.900794  471785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:55:44.913587  471785 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0605 17:55:44.913657  471785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:55:44.926677  471785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:55:44.939206  471785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 17:55:44.951988  471785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0605 17:55:44.964117  471785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0605 17:55:44.974280  471785 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0605 17:55:44.975466  471785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0605 17:55:44.986372  471785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 17:55:45.131188  471785 ssh_runner.go:195] Run: sudo systemctl restart crio
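
The sed edits above boil down to rewriting three keys in /etc/crio/crio.conf.d/02-crio.conf before the restart: pin the pause image, switch the cgroup manager to cgroupfs, and set conmon_cgroup to "pod". An in-memory sketch of the same rewrite with Go regexps (the sample input is made up, and the delete-then-append dance for conmon_cgroup is collapsed into one in-place replace):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := []byte("pause_image = \"registry.k8s.io/pause:3.6\"\n" +
            "cgroup_manager = \"systemd\"\n" +
            "conmon_cgroup = \"system.slice\"\n")
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
            ReplaceAll(conf, []byte(`conmon_cgroup = "pod"`))
        fmt.Print(string(conf))
    }
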
	I0605 17:55:45.315905  471785 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0605 17:55:45.316084  471785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0605 17:55:45.323306  471785 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0605 17:55:45.323332  471785 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0605 17:55:45.323349  471785 command_runner.go:130] > Device: bch/188d	Inode: 186         Links: 1
	I0605 17:55:45.323357  471785 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0605 17:55:45.323363  471785 command_runner.go:130] > Access: 2023-06-05 17:55:45.292230373 +0000
	I0605 17:55:45.323371  471785 command_runner.go:130] > Modify: 2023-06-05 17:55:45.292230373 +0000
	I0605 17:55:45.323377  471785 command_runner.go:130] > Change: 2023-06-05 17:55:45.292230373 +0000
	I0605 17:55:45.323382  471785 command_runner.go:130] >  Birth: -
	I0605 17:55:45.323648  471785 start.go:549] Will wait 60s for crictl version
	I0605 17:55:45.323708  471785 ssh_runner.go:195] Run: which crictl
	I0605 17:55:45.331457  471785 command_runner.go:130] > /usr/bin/crictl
	I0605 17:55:45.331844  471785 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0605 17:55:45.381681  471785 command_runner.go:130] > Version:  0.1.0
	I0605 17:55:45.381706  471785 command_runner.go:130] > RuntimeName:  cri-o
	I0605 17:55:45.381712  471785 command_runner.go:130] > RuntimeVersion:  1.24.5
	I0605 17:55:45.381719  471785 command_runner.go:130] > RuntimeApiVersion:  v1
	I0605 17:55:45.385706  471785 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0605 17:55:45.385823  471785 ssh_runner.go:195] Run: crio --version
	I0605 17:55:45.441110  471785 command_runner.go:130] > crio version 1.24.5
	I0605 17:55:45.441178  471785 command_runner.go:130] > Version:          1.24.5
	I0605 17:55:45.441202  471785 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0605 17:55:45.441223  471785 command_runner.go:130] > GitTreeState:     clean
	I0605 17:55:45.441251  471785 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0605 17:55:45.441272  471785 command_runner.go:130] > GoVersion:        go1.18.2
	I0605 17:55:45.441294  471785 command_runner.go:130] > Compiler:         gc
	I0605 17:55:45.441315  471785 command_runner.go:130] > Platform:         linux/arm64
	I0605 17:55:45.441346  471785 command_runner.go:130] > Linkmode:         dynamic
	I0605 17:55:45.441373  471785 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0605 17:55:45.441394  471785 command_runner.go:130] > SeccompEnabled:   true
	I0605 17:55:45.441423  471785 command_runner.go:130] > AppArmorEnabled:  false
	I0605 17:55:45.442871  471785 ssh_runner.go:195] Run: crio --version
	I0605 17:55:45.493632  471785 command_runner.go:130] > crio version 1.24.5
	I0605 17:55:45.493658  471785 command_runner.go:130] > Version:          1.24.5
	I0605 17:55:45.493670  471785 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0605 17:55:45.493675  471785 command_runner.go:130] > GitTreeState:     clean
	I0605 17:55:45.493716  471785 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0605 17:55:45.493726  471785 command_runner.go:130] > GoVersion:        go1.18.2
	I0605 17:55:45.493733  471785 command_runner.go:130] > Compiler:         gc
	I0605 17:55:45.493738  471785 command_runner.go:130] > Platform:         linux/arm64
	I0605 17:55:45.493747  471785 command_runner.go:130] > Linkmode:         dynamic
	I0605 17:55:45.493756  471785 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0605 17:55:45.493762  471785 command_runner.go:130] > SeccompEnabled:   true
	I0605 17:55:45.493767  471785 command_runner.go:130] > AppArmorEnabled:  false
	I0605 17:55:45.498348  471785 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0605 17:55:45.500058  471785 out.go:177]   - env NO_PROXY=192.168.58.2
	I0605 17:55:45.502180  471785 cli_runner.go:164] Run: docker network inspect multinode-292850 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 17:55:45.525075  471785 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0605 17:55:45.530084  471785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0605 17:55:45.544437  471785 certs.go:56] Setting up /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850 for IP: 192.168.58.3
	I0605 17:55:45.544470  471785 certs.go:190] acquiring lock for shared ca certs: {Name:mkcde6289d01a116d789395fcd8dd485889e790f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 17:55:45.544638  471785 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key
	I0605 17:55:45.544687  471785 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key
	I0605 17:55:45.544706  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0605 17:55:45.544724  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0605 17:55:45.544740  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0605 17:55:45.544752  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0605 17:55:45.544809  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem (1338 bytes)
	W0605 17:55:45.544846  471785 certs.go:433] ignoring /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813_empty.pem, impossibly tiny 0 bytes
	I0605 17:55:45.544859  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem (1679 bytes)
	I0605 17:55:45.544889  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem (1082 bytes)
	I0605 17:55:45.544918  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem (1123 bytes)
	I0605 17:55:45.544946  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem (1675 bytes)
	I0605 17:55:45.544993  471785 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 17:55:45.545029  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:55:45.545046  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem -> /usr/share/ca-certificates/407813.pem
	I0605 17:55:45.545058  471785 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> /usr/share/ca-certificates/4078132.pem
	I0605 17:55:45.545440  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0605 17:55:45.575936  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0605 17:55:45.609013  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0605 17:55:45.646670  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0605 17:55:45.678239  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0605 17:55:45.709117  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem --> /usr/share/ca-certificates/407813.pem (1338 bytes)
	I0605 17:55:45.740281  471785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /usr/share/ca-certificates/4078132.pem (1708 bytes)
	I0605 17:55:45.770420  471785 ssh_runner.go:195] Run: openssl version
	I0605 17:55:45.777404  471785 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0605 17:55:45.777777  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0605 17:55:45.790014  471785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:55:45.795149  471785 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  5 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:55:45.795248  471785 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun  5 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:55:45.795320  471785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0605 17:55:45.804336  471785 command_runner.go:130] > b5213941
	I0605 17:55:45.804758  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0605 17:55:45.817041  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407813.pem && ln -fs /usr/share/ca-certificates/407813.pem /etc/ssl/certs/407813.pem"
	I0605 17:55:45.829977  471785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407813.pem
	I0605 17:55:45.834733  471785 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  5 17:39 /usr/share/ca-certificates/407813.pem
	I0605 17:55:45.834842  471785 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun  5 17:39 /usr/share/ca-certificates/407813.pem
	I0605 17:55:45.834908  471785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407813.pem
	I0605 17:55:45.844157  471785 command_runner.go:130] > 51391683
	I0605 17:55:45.844237  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407813.pem /etc/ssl/certs/51391683.0"
	I0605 17:55:45.856391  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4078132.pem && ln -fs /usr/share/ca-certificates/4078132.pem /etc/ssl/certs/4078132.pem"
	I0605 17:55:45.868601  471785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4078132.pem
	I0605 17:55:45.879737  471785 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  5 17:39 /usr/share/ca-certificates/4078132.pem
	I0605 17:55:45.879873  471785 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun  5 17:39 /usr/share/ca-certificates/4078132.pem
	I0605 17:55:45.879979  471785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4078132.pem
	I0605 17:55:45.888759  471785 command_runner.go:130] > 3ec20f2e
	I0605 17:55:45.889130  471785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4078132.pem /etc/ssl/certs/3ec20f2e.0"
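
Each CA is installed by symlinking its OpenSSL subject hash (the b5213941, 51391683, and 3ec20f2e values above) to the PEM file, which is the hash-directory lookup format OpenSSL expects under /etc/ssl/certs. A sketch of that install step; it shells out to the openssl binary just like the log, and the function name is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA computes the subject hash of a PEM certificate and links
    // <hash>.0 in /etc/ssl/certs to it ("ln -fs" semantics).
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ignore error: link may not exist yet
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
    }
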
	I0605 17:55:45.901556  471785 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0605 17:55:45.906454  471785 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0605 17:55:45.906584  471785 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0605 17:55:45.906703  471785 ssh_runner.go:195] Run: crio config
	I0605 17:55:45.960885  471785 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0605 17:55:45.960912  471785 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0605 17:55:45.960921  471785 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0605 17:55:45.960925  471785 command_runner.go:130] > #
	I0605 17:55:45.960943  471785 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0605 17:55:45.960956  471785 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0605 17:55:45.960967  471785 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0605 17:55:45.960978  471785 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0605 17:55:45.960986  471785 command_runner.go:130] > # reload'.
	I0605 17:55:45.960994  471785 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0605 17:55:45.961012  471785 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0605 17:55:45.961024  471785 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0605 17:55:45.961031  471785 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0605 17:55:45.961036  471785 command_runner.go:130] > [crio]
	I0605 17:55:45.961047  471785 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0605 17:55:45.961055  471785 command_runner.go:130] > # containers images, in this directory.
	I0605 17:55:45.961067  471785 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0605 17:55:45.961075  471785 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0605 17:55:45.961100  471785 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0605 17:55:45.961111  471785 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0605 17:55:45.961119  471785 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0605 17:55:45.961330  471785 command_runner.go:130] > # storage_driver = "vfs"
	I0605 17:55:45.961375  471785 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0605 17:55:45.961389  471785 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0605 17:55:45.961394  471785 command_runner.go:130] > # storage_option = [
	I0605 17:55:45.961403  471785 command_runner.go:130] > # ]
	I0605 17:55:45.961411  471785 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0605 17:55:45.961419  471785 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0605 17:55:45.961632  471785 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0605 17:55:45.961648  471785 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0605 17:55:45.961666  471785 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0605 17:55:45.961677  471785 command_runner.go:130] > # always happen on a node reboot
	I0605 17:55:45.961683  471785 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0605 17:55:45.961690  471785 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0605 17:55:45.961702  471785 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0605 17:55:45.961718  471785 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0605 17:55:45.961727  471785 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0605 17:55:45.961750  471785 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0605 17:55:45.961766  471785 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0605 17:55:45.961772  471785 command_runner.go:130] > # internal_wipe = true
	I0605 17:55:45.961783  471785 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0605 17:55:45.961794  471785 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0605 17:55:45.961801  471785 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0605 17:55:45.961818  471785 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0605 17:55:45.961826  471785 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0605 17:55:45.961834  471785 command_runner.go:130] > [crio.api]
	I0605 17:55:45.961840  471785 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0605 17:55:45.961846  471785 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0605 17:55:45.961853  471785 command_runner.go:130] > # IP address on which the stream server will listen.
	I0605 17:55:45.961858  471785 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0605 17:55:45.961869  471785 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0605 17:55:45.961886  471785 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0605 17:55:45.961892  471785 command_runner.go:130] > # stream_port = "0"
	I0605 17:55:45.961903  471785 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0605 17:55:45.961910  471785 command_runner.go:130] > # stream_enable_tls = false
	I0605 17:55:45.961917  471785 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0605 17:55:45.961926  471785 command_runner.go:130] > # stream_idle_timeout = ""
	I0605 17:55:45.961934  471785 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0605 17:55:45.961941  471785 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0605 17:55:45.961951  471785 command_runner.go:130] > # minutes.
	I0605 17:55:45.961961  471785 command_runner.go:130] > # stream_tls_cert = ""
	I0605 17:55:45.961971  471785 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0605 17:55:45.961982  471785 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0605 17:55:45.961990  471785 command_runner.go:130] > # stream_tls_key = ""
	I0605 17:55:45.962000  471785 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0605 17:55:45.962008  471785 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0605 17:55:45.962018  471785 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0605 17:55:45.962023  471785 command_runner.go:130] > # stream_tls_ca = ""
	I0605 17:55:45.962043  471785 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0605 17:55:45.962049  471785 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0605 17:55:45.962060  471785 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0605 17:55:45.962069  471785 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0605 17:55:45.962099  471785 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0605 17:55:45.962116  471785 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0605 17:55:45.962122  471785 command_runner.go:130] > [crio.runtime]
	I0605 17:55:45.962135  471785 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0605 17:55:45.962142  471785 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0605 17:55:45.962147  471785 command_runner.go:130] > # "nofile=1024:2048"
	I0605 17:55:45.962157  471785 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0605 17:55:45.962163  471785 command_runner.go:130] > # default_ulimits = [
	I0605 17:55:45.962172  471785 command_runner.go:130] > # ]
	I0605 17:55:45.962186  471785 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0605 17:55:45.962194  471785 command_runner.go:130] > # no_pivot = false
	I0605 17:55:45.962201  471785 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0605 17:55:45.962214  471785 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0605 17:55:45.962220  471785 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0605 17:55:45.962231  471785 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0605 17:55:45.962237  471785 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0605 17:55:45.962246  471785 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0605 17:55:45.962260  471785 command_runner.go:130] > # conmon = ""
	I0605 17:55:45.962267  471785 command_runner.go:130] > # Cgroup setting for conmon
	I0605 17:55:45.962278  471785 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0605 17:55:45.962286  471785 command_runner.go:130] > conmon_cgroup = "pod"
	I0605 17:55:45.962294  471785 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0605 17:55:45.962305  471785 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0605 17:55:45.962313  471785 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0605 17:55:45.962323  471785 command_runner.go:130] > # conmon_env = [
	I0605 17:55:45.962327  471785 command_runner.go:130] > # ]
	I0605 17:55:45.962345  471785 command_runner.go:130] > # Additional environment variables to set for all the
	I0605 17:55:45.962351  471785 command_runner.go:130] > # containers. These are overridden if set in the
	I0605 17:55:45.962358  471785 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0605 17:55:45.962366  471785 command_runner.go:130] > # default_env = [
	I0605 17:55:45.962370  471785 command_runner.go:130] > # ]
	I0605 17:55:45.962379  471785 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0605 17:55:45.962387  471785 command_runner.go:130] > # selinux = false
	I0605 17:55:45.962395  471785 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0605 17:55:45.962412  471785 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0605 17:55:45.962424  471785 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0605 17:55:45.962430  471785 command_runner.go:130] > # seccomp_profile = ""
	I0605 17:55:45.962443  471785 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0605 17:55:45.962451  471785 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0605 17:55:45.962459  471785 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0605 17:55:45.962466  471785 command_runner.go:130] > # which might increase security.
	I0605 17:55:45.962659  471785 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0605 17:55:45.962676  471785 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0605 17:55:45.962685  471785 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0605 17:55:45.962706  471785 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0605 17:55:45.962719  471785 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0605 17:55:45.962725  471785 command_runner.go:130] > # This option supports live configuration reload.
	I0605 17:55:45.962735  471785 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0605 17:55:45.962743  471785 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0605 17:55:45.962753  471785 command_runner.go:130] > # the cgroup blockio controller.
	I0605 17:55:45.962759  471785 command_runner.go:130] > # blockio_config_file = ""
	I0605 17:55:45.962767  471785 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0605 17:55:45.962783  471785 command_runner.go:130] > # irqbalance daemon.
	I0605 17:55:45.962792  471785 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0605 17:55:45.962801  471785 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0605 17:55:45.962811  471785 command_runner.go:130] > # This option supports live configuration reload.
	I0605 17:55:45.962816  471785 command_runner.go:130] > # rdt_config_file = ""
	I0605 17:55:45.962823  471785 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0605 17:55:45.962832  471785 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0605 17:55:45.962839  471785 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0605 17:55:45.962851  471785 command_runner.go:130] > # separate_pull_cgroup = ""
	I0605 17:55:45.962862  471785 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0605 17:55:45.962873  471785 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0605 17:55:45.962881  471785 command_runner.go:130] > # will be added.
	I0605 17:55:45.962887  471785 command_runner.go:130] > # default_capabilities = [
	I0605 17:55:45.962891  471785 command_runner.go:130] > # 	"CHOWN",
	I0605 17:55:45.962896  471785 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0605 17:55:45.962903  471785 command_runner.go:130] > # 	"FSETID",
	I0605 17:55:45.963099  471785 command_runner.go:130] > # 	"FOWNER",
	I0605 17:55:45.963113  471785 command_runner.go:130] > # 	"SETGID",
	I0605 17:55:45.963119  471785 command_runner.go:130] > # 	"SETUID",
	I0605 17:55:45.963133  471785 command_runner.go:130] > # 	"SETPCAP",
	I0605 17:55:45.963143  471785 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0605 17:55:45.963148  471785 command_runner.go:130] > # 	"KILL",
	I0605 17:55:45.963152  471785 command_runner.go:130] > # ]
	I0605 17:55:45.963163  471785 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0605 17:55:45.963175  471785 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0605 17:55:45.963181  471785 command_runner.go:130] > # add_inheritable_capabilities = true
	I0605 17:55:45.963193  471785 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0605 17:55:45.963208  471785 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0605 17:55:45.963215  471785 command_runner.go:130] > # default_sysctls = [
	I0605 17:55:45.963220  471785 command_runner.go:130] > # ]
	I0605 17:55:45.963233  471785 command_runner.go:130] > # List of devices on the host that a
	I0605 17:55:45.963246  471785 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0605 17:55:45.963256  471785 command_runner.go:130] > # allowed_devices = [
	I0605 17:55:45.963265  471785 command_runner.go:130] > # 	"/dev/fuse",
	I0605 17:55:45.963269  471785 command_runner.go:130] > # ]
	I0605 17:55:45.963282  471785 command_runner.go:130] > # List of additional devices, specified as
	I0605 17:55:45.963308  471785 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0605 17:55:45.963318  471785 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0605 17:55:45.963328  471785 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0605 17:55:45.963337  471785 command_runner.go:130] > # additional_devices = [
	I0605 17:55:45.963342  471785 command_runner.go:130] > # ]
	I0605 17:55:45.963349  471785 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0605 17:55:45.963364  471785 command_runner.go:130] > # cdi_spec_dirs = [
	I0605 17:55:45.963369  471785 command_runner.go:130] > # 	"/etc/cdi",
	I0605 17:55:45.963380  471785 command_runner.go:130] > # 	"/var/run/cdi",
	I0605 17:55:45.963384  471785 command_runner.go:130] > # ]
	I0605 17:55:45.963392  471785 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0605 17:55:45.963407  471785 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0605 17:55:45.963413  471785 command_runner.go:130] > # Defaults to false.
	I0605 17:55:45.963419  471785 command_runner.go:130] > # device_ownership_from_security_context = false
	I0605 17:55:45.963443  471785 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0605 17:55:45.963463  471785 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0605 17:55:45.963469  471785 command_runner.go:130] > # hooks_dir = [
	I0605 17:55:45.963485  471785 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0605 17:55:45.963490  471785 command_runner.go:130] > # ]
	I0605 17:55:45.963498  471785 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0605 17:55:45.963518  471785 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0605 17:55:45.963533  471785 command_runner.go:130] > # its default mounts from the following two files:
	I0605 17:55:45.963539  471785 command_runner.go:130] > #
	I0605 17:55:45.963546  471785 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0605 17:55:45.963556  471785 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0605 17:55:45.963566  471785 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0605 17:55:45.963571  471785 command_runner.go:130] > #
	I0605 17:55:45.963578  471785 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0605 17:55:45.963595  471785 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0605 17:55:45.963604  471785 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0605 17:55:45.963613  471785 command_runner.go:130] > #      only add mounts it finds in this file.
	I0605 17:55:45.963617  471785 command_runner.go:130] > #
	I0605 17:55:45.963624  471785 command_runner.go:130] > # default_mounts_file = ""
	I0605 17:55:45.963631  471785 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0605 17:55:45.963641  471785 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0605 17:55:45.963873  471785 command_runner.go:130] > # pids_limit = 0
	I0605 17:55:45.963890  471785 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0605 17:55:45.963909  471785 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0605 17:55:45.963930  471785 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0605 17:55:45.963941  471785 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0605 17:55:45.963946  471785 command_runner.go:130] > # log_size_max = -1
	I0605 17:55:45.963956  471785 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0605 17:55:45.963965  471785 command_runner.go:130] > # log_to_journald = false
	I0605 17:55:45.963973  471785 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0605 17:55:45.963993  471785 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0605 17:55:45.964005  471785 command_runner.go:130] > # Path to directory for container attach sockets.
	I0605 17:55:45.964011  471785 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0605 17:55:45.964018  471785 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0605 17:55:45.964028  471785 command_runner.go:130] > # bind_mount_prefix = ""
	I0605 17:55:45.964037  471785 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0605 17:55:45.964045  471785 command_runner.go:130] > # read_only = false
	I0605 17:55:45.964097  471785 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0605 17:55:45.964111  471785 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0605 17:55:45.964117  471785 command_runner.go:130] > # live configuration reload.
	I0605 17:55:45.964124  471785 command_runner.go:130] > # log_level = "info"
	I0605 17:55:45.964138  471785 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0605 17:55:45.964146  471785 command_runner.go:130] > # This option supports live configuration reload.
	I0605 17:55:45.964155  471785 command_runner.go:130] > # log_filter = ""
	I0605 17:55:45.964170  471785 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0605 17:55:45.964181  471785 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0605 17:55:45.964186  471785 command_runner.go:130] > # separated by comma.
	I0605 17:55:45.964191  471785 command_runner.go:130] > # uid_mappings = ""
	I0605 17:55:45.964199  471785 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0605 17:55:45.964208  471785 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0605 17:55:45.964215  471785 command_runner.go:130] > # separated by comma.
	I0605 17:55:45.964226  471785 command_runner.go:130] > # gid_mappings = ""
	I0605 17:55:45.964234  471785 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0605 17:55:45.964254  471785 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0605 17:55:45.964268  471785 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0605 17:55:45.964274  471785 command_runner.go:130] > # minimum_mappable_uid = -1
	I0605 17:55:45.964286  471785 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0605 17:55:45.964293  471785 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0605 17:55:45.964301  471785 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0605 17:55:45.964559  471785 command_runner.go:130] > # minimum_mappable_gid = -1
	I0605 17:55:45.964580  471785 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0605 17:55:45.964589  471785 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0605 17:55:45.964599  471785 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0605 17:55:45.964604  471785 command_runner.go:130] > # ctr_stop_timeout = 30
	I0605 17:55:45.964613  471785 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0605 17:55:45.964631  471785 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0605 17:55:45.964641  471785 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0605 17:55:45.964648  471785 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0605 17:55:45.964658  471785 command_runner.go:130] > # drop_infra_ctr = true
	I0605 17:55:45.964666  471785 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0605 17:55:45.964673  471785 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0605 17:55:45.964684  471785 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0605 17:55:45.964692  471785 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0605 17:55:45.964708  471785 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0605 17:55:45.964719  471785 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0605 17:55:45.964725  471785 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0605 17:55:45.964734  471785 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0605 17:55:45.964742  471785 command_runner.go:130] > # pinns_path = ""
	I0605 17:55:45.964751  471785 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0605 17:55:45.964763  471785 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0605 17:55:45.964782  471785 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0605 17:55:45.964799  471785 command_runner.go:130] > # default_runtime = "runc"
	I0605 17:55:45.964806  471785 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0605 17:55:45.964819  471785 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0605 17:55:45.964838  471785 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0605 17:55:45.964859  471785 command_runner.go:130] > # creation as a file is not desired either.
	I0605 17:55:45.964875  471785 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0605 17:55:45.964883  471785 command_runner.go:130] > # the hostname is being managed dynamically.
	I0605 17:55:45.964892  471785 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0605 17:55:45.964896  471785 command_runner.go:130] > # ]
	I0605 17:55:45.964906  471785 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0605 17:55:45.964924  471785 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0605 17:55:45.964932  471785 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0605 17:55:45.964944  471785 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0605 17:55:45.964948  471785 command_runner.go:130] > #
	I0605 17:55:45.964954  471785 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0605 17:55:45.964964  471785 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0605 17:55:45.964969  471785 command_runner.go:130] > #  runtime_type = "oci"
	I0605 17:55:45.964975  471785 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0605 17:55:45.964985  471785 command_runner.go:130] > #  privileged_without_host_devices = false
	I0605 17:55:45.964991  471785 command_runner.go:130] > #  allowed_annotations = []
	I0605 17:55:45.965000  471785 command_runner.go:130] > # Where:
	I0605 17:55:45.965011  471785 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0605 17:55:45.965019  471785 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0605 17:55:45.965027  471785 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0605 17:55:45.965037  471785 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0605 17:55:45.965042  471785 command_runner.go:130] > #   in $PATH.
	I0605 17:55:45.965050  471785 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0605 17:55:45.965060  471785 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0605 17:55:45.965075  471785 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0605 17:55:45.965084  471785 command_runner.go:130] > #   state.
	I0605 17:55:45.965092  471785 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0605 17:55:45.965099  471785 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0605 17:55:45.965107  471785 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0605 17:55:45.965114  471785 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0605 17:55:45.965125  471785 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0605 17:55:45.965136  471785 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0605 17:55:45.965150  471785 command_runner.go:130] > #   The currently recognized values are:
	I0605 17:55:45.965158  471785 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0605 17:55:45.965171  471785 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0605 17:55:45.965178  471785 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0605 17:55:45.965186  471785 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0605 17:55:45.965195  471785 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0605 17:55:45.965206  471785 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0605 17:55:45.965214  471785 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0605 17:55:45.965303  471785 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0605 17:55:45.965319  471785 command_runner.go:130] > #   should be moved to the container's cgroup
	I0605 17:55:45.965326  471785 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0605 17:55:45.965332  471785 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0605 17:55:45.965338  471785 command_runner.go:130] > runtime_type = "oci"
	I0605 17:55:45.965348  471785 command_runner.go:130] > runtime_root = "/run/runc"
	I0605 17:55:45.965353  471785 command_runner.go:130] > runtime_config_path = ""
	I0605 17:55:45.965358  471785 command_runner.go:130] > monitor_path = ""
	I0605 17:55:45.965363  471785 command_runner.go:130] > monitor_cgroup = ""
	I0605 17:55:45.965375  471785 command_runner.go:130] > monitor_exec_cgroup = ""
	I0605 17:55:45.965393  471785 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0605 17:55:45.965402  471785 command_runner.go:130] > # running containers
	I0605 17:55:45.965408  471785 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0605 17:55:45.965416  471785 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0605 17:55:45.965428  471785 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0605 17:55:45.965436  471785 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0605 17:55:45.965451  471785 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0605 17:55:45.965457  471785 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0605 17:55:45.965463  471785 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0605 17:55:45.965470  471785 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0605 17:55:45.965480  471785 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0605 17:55:45.965486  471785 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0605 17:55:45.965494  471785 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0605 17:55:45.965504  471785 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0605 17:55:45.965512  471785 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0605 17:55:45.965532  471785 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0605 17:55:45.965543  471785 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0605 17:55:45.965550  471785 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0605 17:55:45.965561  471785 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0605 17:55:45.965574  471785 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0605 17:55:45.965582  471785 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0605 17:55:45.965599  471785 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0605 17:55:45.965607  471785 command_runner.go:130] > # Example:
	I0605 17:55:45.965613  471785 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0605 17:55:45.965619  471785 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0605 17:55:45.965631  471785 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0605 17:55:45.965637  471785 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0605 17:55:45.965643  471785 command_runner.go:130] > # cpuset = 0
	I0605 17:55:45.965648  471785 command_runner.go:130] > # cpushares = "0-1"
	I0605 17:55:45.965657  471785 command_runner.go:130] > # Where:
	I0605 17:55:45.965662  471785 command_runner.go:130] > # The workload name is workload-type.
	I0605 17:55:45.965677  471785 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0605 17:55:45.965687  471785 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0605 17:55:45.965695  471785 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0605 17:55:45.965705  471785 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0605 17:55:45.965715  471785 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0605 17:55:45.965720  471785 command_runner.go:130] > # 
	I0605 17:55:45.965728  471785 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0605 17:55:45.965732  471785 command_runner.go:130] > #
	I0605 17:55:45.965746  471785 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0605 17:55:45.965757  471785 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0605 17:55:45.965766  471785 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0605 17:55:45.965779  471785 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0605 17:55:45.965787  471785 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0605 17:55:45.965795  471785 command_runner.go:130] > [crio.image]
	I0605 17:55:45.965803  471785 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0605 17:55:45.966011  471785 command_runner.go:130] > # default_transport = "docker://"
	I0605 17:55:45.966064  471785 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0605 17:55:45.966074  471785 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0605 17:55:45.966079  471785 command_runner.go:130] > # global_auth_file = ""
	I0605 17:55:45.966086  471785 command_runner.go:130] > # The image used to instantiate infra containers.
	I0605 17:55:45.966092  471785 command_runner.go:130] > # This option supports live configuration reload.
	I0605 17:55:45.966098  471785 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0605 17:55:45.966110  471785 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0605 17:55:45.966128  471785 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0605 17:55:45.966138  471785 command_runner.go:130] > # This option supports live configuration reload.
	I0605 17:55:45.966144  471785 command_runner.go:130] > # pause_image_auth_file = ""
	I0605 17:55:45.966154  471785 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0605 17:55:45.966162  471785 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0605 17:55:45.966169  471785 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0605 17:55:45.966177  471785 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0605 17:55:45.966184  471785 command_runner.go:130] > # pause_command = "/pause"
	I0605 17:55:45.966198  471785 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0605 17:55:45.966212  471785 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0605 17:55:45.966222  471785 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0605 17:55:45.966232  471785 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0605 17:55:45.966238  471785 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0605 17:55:45.966246  471785 command_runner.go:130] > # signature_policy = ""
	I0605 17:55:45.966254  471785 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0605 17:55:45.966262  471785 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0605 17:55:45.966273  471785 command_runner.go:130] > # changing them here.
	I0605 17:55:45.966282  471785 command_runner.go:130] > # insecure_registries = [
	I0605 17:55:45.966287  471785 command_runner.go:130] > # ]
	I0605 17:55:45.966294  471785 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0605 17:55:45.966303  471785 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0605 17:55:45.966309  471785 command_runner.go:130] > # image_volumes = "mkdir"
	I0605 17:55:45.966317  471785 command_runner.go:130] > # Temporary directory to use for storing big files
	I0605 17:55:45.966322  471785 command_runner.go:130] > # big_files_temporary_dir = ""
	I0605 17:55:45.966332  471785 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0605 17:55:45.966336  471785 command_runner.go:130] > # CNI plugins.
	I0605 17:55:45.966346  471785 command_runner.go:130] > [crio.network]
	I0605 17:55:45.966356  471785 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0605 17:55:45.966365  471785 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0605 17:55:45.966371  471785 command_runner.go:130] > # cni_default_network = ""
	I0605 17:55:45.966380  471785 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0605 17:55:45.966386  471785 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0605 17:55:45.966396  471785 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0605 17:55:45.966401  471785 command_runner.go:130] > # plugin_dirs = [
	I0605 17:55:45.966425  471785 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0605 17:55:45.966438  471785 command_runner.go:130] > # ]
	I0605 17:55:45.966446  471785 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0605 17:55:45.966456  471785 command_runner.go:130] > [crio.metrics]
	I0605 17:55:45.966464  471785 command_runner.go:130] > # Globally enable or disable metrics support.
	I0605 17:55:45.966771  471785 command_runner.go:130] > # enable_metrics = false
	I0605 17:55:45.966789  471785 command_runner.go:130] > # Specify enabled metrics collectors.
	I0605 17:55:45.966796  471785 command_runner.go:130] > # Per default all metrics are enabled.
	I0605 17:55:45.966804  471785 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0605 17:55:45.966829  471785 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0605 17:55:45.966842  471785 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0605 17:55:45.966848  471785 command_runner.go:130] > # metrics_collectors = [
	I0605 17:55:45.966852  471785 command_runner.go:130] > # 	"operations",
	I0605 17:55:45.966861  471785 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0605 17:55:45.966873  471785 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0605 17:55:45.966878  471785 command_runner.go:130] > # 	"operations_errors",
	I0605 17:55:45.966884  471785 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0605 17:55:45.966903  471785 command_runner.go:130] > # 	"image_pulls_by_name",
	I0605 17:55:45.966915  471785 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0605 17:55:45.966921  471785 command_runner.go:130] > # 	"image_pulls_failures",
	I0605 17:55:45.966930  471785 command_runner.go:130] > # 	"image_pulls_successes",
	I0605 17:55:45.966935  471785 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0605 17:55:45.966942  471785 command_runner.go:130] > # 	"image_layer_reuse",
	I0605 17:55:45.966948  471785 command_runner.go:130] > # 	"containers_oom_total",
	I0605 17:55:45.966955  471785 command_runner.go:130] > # 	"containers_oom",
	I0605 17:55:45.966961  471785 command_runner.go:130] > # 	"processes_defunct",
	I0605 17:55:45.966982  471785 command_runner.go:130] > # 	"operations_total",
	I0605 17:55:45.966997  471785 command_runner.go:130] > # 	"operations_latency_seconds",
	I0605 17:55:45.967003  471785 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0605 17:55:45.967011  471785 command_runner.go:130] > # 	"operations_errors_total",
	I0605 17:55:45.967016  471785 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0605 17:55:45.967024  471785 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0605 17:55:45.967031  471785 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0605 17:55:45.967037  471785 command_runner.go:130] > # 	"image_pulls_success_total",
	I0605 17:55:45.967086  471785 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0605 17:55:45.967099  471785 command_runner.go:130] > # 	"containers_oom_count_total",
	I0605 17:55:45.967104  471785 command_runner.go:130] > # ]
	I0605 17:55:45.967111  471785 command_runner.go:130] > # The port on which the metrics server will listen.
	I0605 17:55:45.967120  471785 command_runner.go:130] > # metrics_port = 9090
	I0605 17:55:45.967128  471785 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0605 17:55:45.967133  471785 command_runner.go:130] > # metrics_socket = ""
	I0605 17:55:45.967139  471785 command_runner.go:130] > # The certificate for the secure metrics server.
	I0605 17:55:45.967170  471785 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0605 17:55:45.967184  471785 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0605 17:55:45.967191  471785 command_runner.go:130] > # certificate on any modification event.
	I0605 17:55:45.967199  471785 command_runner.go:130] > # metrics_cert = ""
	I0605 17:55:45.967205  471785 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0605 17:55:45.967213  471785 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0605 17:55:45.967218  471785 command_runner.go:130] > # metrics_key = ""
	I0605 17:55:45.967226  471785 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0605 17:55:45.967245  471785 command_runner.go:130] > [crio.tracing]
	I0605 17:55:45.967257  471785 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0605 17:55:45.967263  471785 command_runner.go:130] > # enable_tracing = false
	I0605 17:55:45.967273  471785 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0605 17:55:45.967278  471785 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0605 17:55:45.967287  471785 command_runner.go:130] > # Number of samples to collect per million spans.
	I0605 17:55:45.967293  471785 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0605 17:55:45.967301  471785 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0605 17:55:45.967315  471785 command_runner.go:130] > [crio.stats]
	I0605 17:55:45.967326  471785 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0605 17:55:45.967334  471785 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0605 17:55:45.967342  471785 command_runner.go:130] > # stats_collection_period = 0
	I0605 17:55:45.969456  471785 command_runner.go:130] ! time="2023-06-05 17:55:45.958195361Z" level=info msg="Starting CRI-O, version: 1.24.5, git: b007cb6753d97de6218787b6894b0e3cc1dc8ecd(clean)"
	I0605 17:55:45.969485  471785 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
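
The config above repeatedly marks options with "This option supports live configuration reload": CRI-O re-reads those on SIGHUP, so a full restart is only needed for the rest. A hedged Go sketch of triggering that reload after editing /etc/crio/crio.conf (assumes a pidof binary and root privileges; Linux-only because of syscall.Kill):

    package main

    import (
    	"os/exec"
    	"strconv"
    	"strings"
    	"syscall"
    )

    func main() {
    	// pidof exits non-zero when no crio process exists, so err covers that case.
    	out, err := exec.Command("pidof", "crio").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid, err := strconv.Atoi(strings.Fields(string(out))[0])
    	if err != nil {
    		panic(err)
    	}
    	// SIGHUP asks the daemon to re-read only the live-reloadable options.
    	if err := syscall.Kill(pid, syscall.SIGHUP); err != nil {
    		panic(err)
    	}
    }
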
	I0605 17:55:45.969551  471785 cni.go:84] Creating CNI manager for ""
	I0605 17:55:45.969564  471785 cni.go:136] 2 nodes found, recommending kindnet
	I0605 17:55:45.969572  471785 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0605 17:55:45.969592  471785 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-292850 NodeName:multinode-292850-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0605 17:55:45.969750  471785 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-292850-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
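
minikube renders the kubeadm config above by substituting cluster values (advertise address, node name, pod subnet, and so on) into a template. A simplified, standard-library-only Go sketch of that rendering step; the struct and template here are illustrative stand-ins, not minikube's actual ones:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmValues is an illustrative subset of the values substituted above.
    type kubeadmValues struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	PodSubnet        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	v := kubeadmValues{
    		AdvertiseAddress: "192.168.58.3",
    		BindPort:         8443,
    		NodeName:         "multinode-292850-m02",
    		PodSubnet:        "10.244.0.0/16",
    	}
    	if err := t.Execute(os.Stdout, v); err != nil {
    		panic(err)
    	}
    }
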
	
	I0605 17:55:45.969824  471785 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-292850-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-292850 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0605 17:55:45.969915  471785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0605 17:55:45.981234  471785 command_runner.go:130] > kubeadm
	I0605 17:55:45.981256  471785 command_runner.go:130] > kubectl
	I0605 17:55:45.981261  471785 command_runner.go:130] > kubelet
	I0605 17:55:45.981275  471785 binaries.go:44] Found k8s binaries, skipping transfer
	I0605 17:55:45.981329  471785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0605 17:55:45.992300  471785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0605 17:55:46.018832  471785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0605 17:55:46.044020  471785 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0605 17:55:46.049171  471785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
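
The shell one-liner above is an idempotent /etc/hosts update: filter out any stale line ending in a tab plus the control-plane alias, append the current mapping, and copy the result back. The same logic as a Go sketch (needs root to write /etc/hosts; the host and IP are taken from the log above):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const host = "control-plane.minikube.internal"
    	const ip = "192.168.58.2"

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any existing entry for the alias, like `grep -v` above.
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    }

Unlike the shell version, this writes in place rather than staging through a temp file; a hardened version would write to a temp file and rename it over /etc/hosts.
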
	I0605 17:55:46.064942  471785 host.go:66] Checking if "multinode-292850" exists ...
	I0605 17:55:46.065214  471785 start.go:301] JoinCluster: &{Name:multinode-292850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-292850 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:55:46.065311  471785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0605 17:55:46.065362  471785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:55:46.065756  471785 config.go:182] Loaded profile config "multinode-292850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:55:46.084915  471785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa Username:docker}
	I0605 17:55:46.266723  471785 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zcfbxu.zb8veqdbe5inaxjr --discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 
	I0605 17:55:46.266778  471785 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0605 17:55:46.266814  471785 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zcfbxu.zb8veqdbe5inaxjr --discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-292850-m02"
	I0605 17:55:46.312731  471785 command_runner.go:130] > [preflight] Running pre-flight checks
	I0605 17:55:46.349840  471785 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0605 17:55:46.349861  471785 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1037-aws
	I0605 17:55:46.349868  471785 command_runner.go:130] > OS: Linux
	I0605 17:55:46.349874  471785 command_runner.go:130] > CGROUPS_CPU: enabled
	I0605 17:55:46.349881  471785 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0605 17:55:46.349887  471785 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0605 17:55:46.349893  471785 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0605 17:55:46.349899  471785 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0605 17:55:46.349905  471785 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0605 17:55:46.349915  471785 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0605 17:55:46.349921  471785 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0605 17:55:46.349929  471785 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0605 17:55:46.474905  471785 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0605 17:55:46.474932  471785 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0605 17:55:46.507155  471785 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0605 17:55:46.507505  471785 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0605 17:55:46.507522  471785 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0605 17:55:46.618397  471785 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0605 17:55:49.635585  471785 command_runner.go:130] > This node has joined the cluster:
	I0605 17:55:49.635607  471785 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0605 17:55:49.635616  471785 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0605 17:55:49.635624  471785 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0605 17:55:49.638865  471785 command_runner.go:130] ! W0605 17:55:46.312131    1025 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0605 17:55:49.638898  471785 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-aws\n", err: exit status 1
	I0605 17:55:49.638912  471785 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0605 17:55:49.638925  471785 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zcfbxu.zb8veqdbe5inaxjr --discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-292850-m02": (3.372094286s)
	I0605 17:55:49.638941  471785 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0605 17:55:49.898352  471785 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0605 17:55:49.898382  471785 start.go:303] JoinCluster complete in 3.833166329s
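Stripped of log prefixes, the worker join above reduces to three commands run over SSH. This is a condensed sketch of exactly what the log records, not additional steps; the token and discovery hash are the throwaway values minted for this run:

	# On the control plane: mint a non-expiring bootstrap token and print the matching join command.
	sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0

	# On the worker: join, tolerating preflight warnings and pinning the CRI socket and node name.
	sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 \
	    --token zcfbxu.zb8veqdbe5inaxjr \
	    --discovery-token-ca-cert-hash sha256:4e18d8ca6d78476699449d3972f71851a29312a8d61265b02534e66f98373210 \
	    --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-292850-m02

	# Then persist kubelet across reboots, which clears the Service-Kubelet preflight warning above.
	sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet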
	I0605 17:55:49.898394  471785 cni.go:84] Creating CNI manager for ""
	I0605 17:55:49.898400  471785 cni.go:136] 2 nodes found, recommending kindnet
	I0605 17:55:49.898486  471785 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0605 17:55:49.903423  471785 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0605 17:55:49.903446  471785 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0605 17:55:49.903454  471785 command_runner.go:130] > Device: 3ah/58d	Inode: 3642593     Links: 1
	I0605 17:55:49.903462  471785 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0605 17:55:49.903469  471785 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0605 17:55:49.903475  471785 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0605 17:55:49.903481  471785 command_runner.go:130] > Change: 2023-06-05 17:31:01.224910109 +0000
	I0605 17:55:49.903487  471785 command_runner.go:130] >  Birth: 2023-06-05 17:31:01.180910227 +0000
	I0605 17:55:49.903591  471785 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0605 17:55:49.903605  471785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0605 17:55:49.930712  471785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0605 17:55:50.312837  471785 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0605 17:55:50.319799  471785 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0605 17:55:50.329348  471785 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0605 17:55:50.359415  471785 command_runner.go:130] > daemonset.apps/kindnet configured
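With the worker joined, the CNI reconcile above is a single idempotent apply of the generated kindnet manifest, using the cluster's pinned kubectl and kubeconfig (paths verbatim from the log; "unchanged" vs. "configured" shows which objects actually differed):

	# a minimal sketch of the apply the log records (the manifest was copied to /var/tmp/minikube/cni.yaml first)
	sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml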
	I0605 17:55:50.367622  471785 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:55:50.367965  471785 kapi.go:59] client config for multinode-292850: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.crt", KeyFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.key", CAFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13df7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0605 17:55:50.368343  471785 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0605 17:55:50.368354  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:50.368373  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:50.368380  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:50.372473  471785 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0605 17:55:50.372553  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:50.372588  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:50.372631  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:50.372663  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:50.372683  471785 round_trippers.go:580]     Content-Length: 291
	I0605 17:55:50.372706  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:50 GMT
	I0605 17:55:50.372739  471785 round_trippers.go:580]     Audit-Id: fca64e97-abd2-4875-9303-f27dc27ca82a
	I0605 17:55:50.372761  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:50.372805  471785 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"caff1eae-79ac-49ee-ac75-910d1f9235c3","resourceVersion":"425","creationTimestamp":"2023-06-05T17:54:47Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0605 17:55:50.372943  471785 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-292850" context rescaled to 1 replicas
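The GET above reads the coredns deployment's scale subresource; spec.replicas is already 1, so the "rescaled to 1 replicas" message confirms a no-op. minikube does this through client-go, but a hypothetical kubectl equivalent of the same check-and-set would be:

	kubectl --context multinode-292850 -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'
	kubectl --context multinode-292850 -n kube-system scale deployment coredns --replicas=1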
	I0605 17:55:50.372996  471785 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0605 17:55:50.377418  471785 out.go:177] * Verifying Kubernetes components...
	I0605 17:55:50.379302  471785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:55:50.401566  471785 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:55:50.401858  471785 kapi.go:59] client config for multinode-292850: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.crt", KeyFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/multinode-292850/client.key", CAFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13df7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0605 17:55:50.402154  471785 node_ready.go:35] waiting up to 6m0s for node "multinode-292850-m02" to be "Ready" ...
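Everything below this point is that wait loop: the node object is fetched roughly twice per second and its Ready condition inspected until it turns True or the 6m0s budget runs out. As an illustrative shorthand (not minikube's actual code path), the same wait can be expressed as:

	kubectl --context multinode-292850 wait --for=condition=Ready node/multinode-292850-m02 --timeout=6m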
	I0605 17:55:50.402228  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:50.402240  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:50.402250  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:50.402262  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:50.405127  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:50.405155  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:50.405164  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:50.405177  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:50.405184  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:50 GMT
	I0605 17:55:50.405191  471785 round_trippers.go:580]     Audit-Id: 969e9316-5241-402f-9f74-9d98d36466c0
	I0605 17:55:50.405198  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:50.405204  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:50.405722  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"463","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0605 17:55:50.907265  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:50.907289  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:50.907300  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:50.907308  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:50.910185  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:50.910210  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:50.910220  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:50.910228  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:50.910236  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:50 GMT
	I0605 17:55:50.910242  471785 round_trippers.go:580]     Audit-Id: 20ab5afe-6f9d-482d-b6bd-e19a685739bd
	I0605 17:55:50.910249  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:50.910256  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:50.910611  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:51.406721  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:51.406745  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:51.406756  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:51.406764  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:51.409595  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:51.409660  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:51.409684  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:51 GMT
	I0605 17:55:51.409707  471785 round_trippers.go:580]     Audit-Id: 6df06f8e-9a8e-4d81-8ddc-c5b164ddc0f4
	I0605 17:55:51.409746  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:51.409760  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:51.409767  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:51.409774  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:51.409896  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:51.906411  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:51.906441  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:51.906453  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:51.906461  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:51.915623  471785 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0605 17:55:51.915708  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:51.915731  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:51.915752  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:51 GMT
	I0605 17:55:51.915775  471785 round_trippers.go:580]     Audit-Id: 7b13fcd5-2eaa-41d8-a583-a8ca878efb6a
	I0605 17:55:51.915799  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:51.915823  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:51.915847  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:51.915983  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:52.406388  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:52.406413  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:52.406423  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:52.406430  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:52.409245  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:52.409311  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:52.409333  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:52 GMT
	I0605 17:55:52.409357  471785 round_trippers.go:580]     Audit-Id: 6e44e745-41ac-4f06-97ad-1d0739834ea5
	I0605 17:55:52.409395  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:52.409408  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:52.409415  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:52.409422  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:52.409596  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:52.410002  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
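The "Ready":"False" summary here is computed from the status.conditions array in the (truncated) response bodies above. To read the same field directly, an illustrative jsonpath query:

	kubectl --context multinode-292850 get node multinode-292850-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'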
	I0605 17:55:52.907124  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:52.907153  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:52.907168  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:52.907179  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:52.910259  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:55:52.910351  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:52.910389  471785 round_trippers.go:580]     Audit-Id: 820ce215-c108-4548-b7c5-efbb277f29d7
	I0605 17:55:52.910407  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:52.910415  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:52.910425  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:52.910450  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:52.910461  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:52 GMT
	I0605 17:55:52.910585  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:53.407012  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:53.407035  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:53.407046  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:53.407053  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:53.409555  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:53.409620  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:53.409642  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:53.409666  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:53 GMT
	I0605 17:55:53.409703  471785 round_trippers.go:580]     Audit-Id: 6cf432ab-4a0f-48aa-9203-ba37e1d9ed35
	I0605 17:55:53.409728  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:53.409742  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:53.409749  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:53.409871  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:53.906321  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:53.906343  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:53.906354  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:53.906362  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:53.908907  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:53.908930  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:53.908939  471785 round_trippers.go:580]     Audit-Id: 038017ac-f8c0-4a85-b278-dc8b22bc5561
	I0605 17:55:53.908947  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:53.908954  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:53.908961  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:53.908967  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:53.908974  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:53 GMT
	I0605 17:55:53.909268  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:54.406391  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:54.406416  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:54.406426  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:54.406434  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:54.409389  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:54.409420  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:54.409430  471785 round_trippers.go:580]     Audit-Id: 8811d01d-e566-49ff-82c7-978ed59c45fb
	I0605 17:55:54.409437  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:54.409443  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:54.409450  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:54.409457  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:54.409469  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:54 GMT
	I0605 17:55:54.409586  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:54.907156  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:54.907186  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:54.907197  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:54.907206  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:54.909896  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:54.909923  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:54.909933  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:54 GMT
	I0605 17:55:54.909940  471785 round_trippers.go:580]     Audit-Id: 290ca49b-3e44-4a4e-bf37-7b45767df791
	I0605 17:55:54.909947  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:54.909954  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:54.909961  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:54.909967  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:54.910115  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:54.910516  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:55:55.407261  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:55.407290  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:55.407301  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:55.407309  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:55.409878  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:55.409956  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:55.409992  471785 round_trippers.go:580]     Audit-Id: 824f174b-4fe7-417d-a6b8-7d3905291023
	I0605 17:55:55.410019  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:55.410041  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:55.410079  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:55.410105  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:55.410119  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:55 GMT
	I0605 17:55:55.410286  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:55.906611  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:55.906633  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:55.906644  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:55.906653  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:55.909272  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:55.909293  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:55.909302  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:55.909309  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:55.909316  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:55.909323  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:55 GMT
	I0605 17:55:55.909329  471785 round_trippers.go:580]     Audit-Id: 98d71d82-fc0c-436e-9a94-200da6289edf
	I0605 17:55:55.909336  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:55.909433  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:56.406356  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:56.406377  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:56.406388  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:56.406395  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:56.408957  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:56.408981  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:56.408990  471785 round_trippers.go:580]     Audit-Id: 3affc40d-70b2-43c9-b4ee-39149d6df120
	I0605 17:55:56.408997  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:56.409004  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:56.409011  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:56.409018  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:56.409028  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:56 GMT
	I0605 17:55:56.409311  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:56.906371  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:56.906399  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:56.906410  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:56.906418  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:56.908989  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:56.909016  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:56.909027  471785 round_trippers.go:580]     Audit-Id: e062dfdb-7848-41c6-871b-25097ab231f4
	I0605 17:55:56.909037  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:56.909044  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:56.909054  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:56.909064  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:56.909071  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:56 GMT
	I0605 17:55:56.909183  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:57.407369  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:57.407396  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:57.407407  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:57.407414  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:57.410162  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:57.410184  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:57.410193  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:57.410202  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:57.410210  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:57 GMT
	I0605 17:55:57.410216  471785 round_trippers.go:580]     Audit-Id: 9d37ca81-5704-4df3-b135-328ce2c689fb
	I0605 17:55:57.410223  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:57.410230  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:57.410338  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:57.410703  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:55:57.906369  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:57.906393  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:57.906405  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:57.906412  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:57.908951  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:57.908973  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:57.908984  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:57.908991  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:57.908998  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:57.909007  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:57.909014  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:57 GMT
	I0605 17:55:57.909022  471785 round_trippers.go:580]     Audit-Id: f888b0fc-916e-4262-9559-79f6569a3786
	I0605 17:55:57.909240  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:58.406348  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:58.406390  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:58.406401  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:58.406409  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:58.409035  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:58.409062  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:58.409071  471785 round_trippers.go:580]     Audit-Id: b68dc5e6-0b1c-40b9-bf66-15b669be5cb5
	I0605 17:55:58.409078  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:58.409085  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:58.409091  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:58.409098  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:58.409109  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:58 GMT
	I0605 17:55:58.409253  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:58.906330  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:58.906354  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:58.906364  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:58.906373  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:58.909067  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:58.909089  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:58.909099  471785 round_trippers.go:580]     Audit-Id: 7a6a12f8-7b85-4a3c-9bda-bef0cd7e51e8
	I0605 17:55:58.909106  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:58.909113  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:58.909120  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:58.909127  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:58.909134  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:58 GMT
	I0605 17:55:58.909236  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:59.406924  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:59.406946  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:59.406956  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:59.406963  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:59.409547  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:59.409568  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:59.409577  471785 round_trippers.go:580]     Audit-Id: 4d300e3a-c58e-4816-b002-71cf191a0c31
	I0605 17:55:59.409584  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:59.409590  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:59.409597  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:59.409603  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:59.409610  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:59 GMT
	I0605 17:55:59.409709  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"474","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0605 17:55:59.906777  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:55:59.906806  471785 round_trippers.go:469] Request Headers:
	I0605 17:55:59.906816  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:55:59.906824  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:55:59.909375  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:55:59.909404  471785 round_trippers.go:577] Response Headers:
	I0605 17:55:59.909414  471785 round_trippers.go:580]     Audit-Id: f9387bdb-6460-4b0b-996d-68bcd0863962
	I0605 17:55:59.909422  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:55:59.909429  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:55:59.909437  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:55:59.909445  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:55:59.909452  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:55:59 GMT
	I0605 17:55:59.909580  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:55:59.909977  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:56:00.406709  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:00.406737  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:00.406748  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:00.406756  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:00.409603  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:00.409661  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:00.409670  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:00 GMT
	I0605 17:56:00.409677  471785 round_trippers.go:580]     Audit-Id: 57aca156-bd40-4788-ba59-bc527550c135
	I0605 17:56:00.409684  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:00.409691  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:00.409697  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:00.409706  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:00.409844  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:00.906386  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:00.906409  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:00.906420  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:00.906428  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:00.909025  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:00.909049  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:00.909063  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:00.909070  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:00 GMT
	I0605 17:56:00.909077  471785 round_trippers.go:580]     Audit-Id: c326885e-3900-479d-b44b-71a4a161dd26
	I0605 17:56:00.909084  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:00.909091  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:00.909098  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:00.909434  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:01.407085  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:01.407133  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:01.407145  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:01.407160  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:01.409755  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:01.409781  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:01.409791  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:01.409798  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:01 GMT
	I0605 17:56:01.409805  471785 round_trippers.go:580]     Audit-Id: 6e0000a2-aadb-4119-8117-ecc9f9239562
	I0605 17:56:01.409812  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:01.409820  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:01.409827  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:01.410189  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:01.906931  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:01.906958  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:01.906968  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:01.906976  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:01.909847  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:01.909876  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:01.909885  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:01.909892  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:01.909899  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:01 GMT
	I0605 17:56:01.909906  471785 round_trippers.go:580]     Audit-Id: 028bf4a8-cb95-471a-ab48-7d4ca291d9d6
	I0605 17:56:01.909913  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:01.909922  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:01.910066  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:01.910509  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:56:02.406307  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:02.406332  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:02.406343  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:02.406350  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:02.409001  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:02.409029  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:02.409039  471785 round_trippers.go:580]     Audit-Id: 31897f37-5cb5-4547-97d1-5f33c72f0487
	I0605 17:56:02.409046  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:02.409053  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:02.409060  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:02.409071  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:02.409080  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:02 GMT
	I0605 17:56:02.409509  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:02.907255  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:02.907304  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:02.907315  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:02.907323  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:02.909992  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:02.910018  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:02.910028  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:02.910035  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:02 GMT
	I0605 17:56:02.910042  471785 round_trippers.go:580]     Audit-Id: bf1fa5d0-fa54-4707-9717-6321d1ae36ad
	I0605 17:56:02.910048  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:02.910057  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:02.910068  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:02.910168  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:03.406418  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:03.406443  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:03.406453  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:03.406461  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:03.409265  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:03.409287  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:03.409296  471785 round_trippers.go:580]     Audit-Id: 360d21de-1e70-4837-ba0b-6f484bf92247
	I0605 17:56:03.409303  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:03.409311  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:03.409317  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:03.409324  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:03.409333  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:03 GMT
	I0605 17:56:03.409464  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:03.906587  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:03.906610  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:03.906621  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:03.906629  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:03.909467  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:03.909494  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:03.909504  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:03.909511  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:03.909518  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:03.909524  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:03.909532  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:03 GMT
	I0605 17:56:03.909539  471785 round_trippers.go:580]     Audit-Id: e175eef1-9ffe-4d87-a6ec-3f70564078ad
	I0605 17:56:03.909918  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:04.406377  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:04.406421  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:04.406431  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:04.406439  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:04.409093  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:04.409146  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:04.409155  471785 round_trippers.go:580]     Audit-Id: eee3907c-fd96-4e9f-97c4-7a96bb18e2fa
	I0605 17:56:04.409163  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:04.409169  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:04.409176  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:04.409182  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:04.409190  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:04 GMT
	I0605 17:56:04.409363  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:04.409744  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:56:04.907029  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:04.907055  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:04.907066  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:04.907074  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:04.909631  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:04.909656  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:04.909665  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:04.909672  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:04.909680  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:04 GMT
	I0605 17:56:04.909686  471785 round_trippers.go:580]     Audit-Id: ed36fa19-12e5-467d-935a-56888ce452b6
	I0605 17:56:04.909693  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:04.909701  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:04.909789  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:05.406350  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:05.406373  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:05.406384  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:05.406392  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:05.409046  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:05.409068  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:05.409077  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:05 GMT
	I0605 17:56:05.409084  471785 round_trippers.go:580]     Audit-Id: 45deee0b-1043-4dfe-8ac5-278c1e5b42cc
	I0605 17:56:05.409091  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:05.409098  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:05.409120  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:05.409129  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:05.409255  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:05.906305  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:05.906334  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:05.906345  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:05.906353  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:05.909256  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:05.909279  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:05.909288  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:05.909296  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:05.909303  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:05.909310  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:05.909316  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:05 GMT
	I0605 17:56:05.909323  471785 round_trippers.go:580]     Audit-Id: 4f0f0859-ea21-4f6c-92b3-57235c24b43f
	I0605 17:56:05.909412  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:06.406396  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:06.406421  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:06.406432  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:06.406440  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:06.409392  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:06.409423  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:06.409434  471785 round_trippers.go:580]     Audit-Id: 51e6709a-8ae1-4072-8917-23bfd2ac94ed
	I0605 17:56:06.409441  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:06.409449  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:06.409456  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:06.409463  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:06.409470  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:06 GMT
	I0605 17:56:06.409589  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:06.409991  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:56:06.906359  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:06.906386  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:06.906401  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:06.906409  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:06.909128  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:06.909151  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:06.909160  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:06.909176  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:06.909184  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:06.909191  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:06.909197  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:06 GMT
	I0605 17:56:06.909204  471785 round_trippers.go:580]     Audit-Id: f938d7d8-9ed7-41f4-93cd-8da8b2a7a089
	I0605 17:56:06.909344  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:07.406399  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:07.406422  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:07.406433  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:07.406440  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:07.409029  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:07.409051  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:07.409060  471785 round_trippers.go:580]     Audit-Id: 338142d3-a427-4812-bd11-670e401502e9
	I0605 17:56:07.409067  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:07.409074  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:07.409080  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:07.409087  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:07.409094  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:07 GMT
	I0605 17:56:07.409246  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:07.906380  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:07.906405  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:07.906416  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:07.906424  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:07.909266  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:07.909288  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:07.909298  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:07 GMT
	I0605 17:56:07.909311  471785 round_trippers.go:580]     Audit-Id: 027173da-825d-4688-8f2c-335cc44c7b48
	I0605 17:56:07.909318  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:07.909324  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:07.909331  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:07.909337  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:07.909454  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:08.406356  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:08.406384  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:08.406395  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:08.406404  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:08.412884  471785 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0605 17:56:08.412989  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:08.413014  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:08 GMT
	I0605 17:56:08.413036  471785 round_trippers.go:580]     Audit-Id: 940b0ee2-7af2-472c-8de0-ea40aba5e0e0
	I0605 17:56:08.413074  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:08.413103  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:08.413126  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:08.413164  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:08.413313  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:08.413742  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:56:08.906607  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:08.906627  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:08.906637  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:08.906645  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:08.909343  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:08.909363  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:08.909372  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:08.909379  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:08.909386  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:08 GMT
	I0605 17:56:08.909393  471785 round_trippers.go:580]     Audit-Id: 956d55e1-8db8-4974-b10a-f01a5b4a51be
	I0605 17:56:08.909400  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:08.909407  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:08.909538  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:09.406650  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:09.406675  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:09.406685  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:09.406693  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:09.409441  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:09.409465  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:09.409475  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:09.409482  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:09.409489  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:09.409496  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:09 GMT
	I0605 17:56:09.409503  471785 round_trippers.go:580]     Audit-Id: 232ba994-0b6b-4d5b-a4e3-a0db0ee9f303
	I0605 17:56:09.409511  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:09.409628  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:09.906312  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:09.906334  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:09.906345  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:09.906352  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:09.908951  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:09.908980  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:09.908990  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:09.909002  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:09.909009  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:09.909017  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:09.909027  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:09 GMT
	I0605 17:56:09.909033  471785 round_trippers.go:580]     Audit-Id: 2455b3bb-4387-432c-a204-6dc072cd46e2
	I0605 17:56:09.909331  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:10.406994  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:10.407021  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:10.407031  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:10.407039  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:10.409606  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:10.409630  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:10.409641  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:10.409648  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:10.409655  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:10 GMT
	I0605 17:56:10.409662  471785 round_trippers.go:580]     Audit-Id: a6d17167-78dd-46f7-881f-ff5c159886bf
	I0605 17:56:10.409672  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:10.409685  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:10.409992  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:10.906685  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:10.906712  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:10.906723  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:10.906731  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:10.917104  471785 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0605 17:56:10.917129  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:10.917139  471785 round_trippers.go:580]     Audit-Id: 6ae01e4c-6846-47af-aadb-df7ede8548ed
	I0605 17:56:10.917146  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:10.917153  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:10.917159  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:10.917167  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:10.917180  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:10 GMT
	I0605 17:56:10.917306  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:10.917701  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:56:11.406393  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:11.406430  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:11.406448  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:11.406456  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:11.409100  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:11.409124  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:11.409134  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:11 GMT
	I0605 17:56:11.409141  471785 round_trippers.go:580]     Audit-Id: d329efdd-e47d-49b8-8d87-06fefed7de44
	I0605 17:56:11.409148  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:11.409154  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:11.409161  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:11.409168  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:11.409396  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:11.906328  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:11.906371  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:11.906381  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:11.906389  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:11.908990  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:11.909020  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:11.909030  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:11.909038  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:11.909045  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:11 GMT
	I0605 17:56:11.909052  471785 round_trippers.go:580]     Audit-Id: d900d020-e484-4f72-9a6d-4d268c09a11b
	I0605 17:56:11.909058  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:11.909066  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:11.909157  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:12.407321  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:12.407348  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:12.407359  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:12.407367  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:12.410140  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:12.410161  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:12.410170  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:12.410177  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:12.410185  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:12 GMT
	I0605 17:56:12.410192  471785 round_trippers.go:580]     Audit-Id: 0a84de92-a5ee-47d0-8f19-8f2ee5990916
	I0605 17:56:12.410198  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:12.410205  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:12.410302  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:12.906358  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:12.906383  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:12.906393  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:12.906401  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:12.909386  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:12.909436  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:12.909446  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:12.909453  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:12 GMT
	I0605 17:56:12.909460  471785 round_trippers.go:580]     Audit-Id: 52a1e173-26ff-4c31-b561-64abca64f6cb
	I0605 17:56:12.909467  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:12.909473  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:12.909480  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:12.909591  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:13.407219  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:13.407247  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:13.407258  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:13.407265  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:13.409779  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:13.409806  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:13.409815  471785 round_trippers.go:580]     Audit-Id: b7f8bd4e-2618-4f05-8d4b-ab842858463e
	I0605 17:56:13.409823  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:13.409829  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:13.409837  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:13.409845  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:13.409852  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:13 GMT
	I0605 17:56:13.409967  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:13.410351  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:56:13.906732  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:13.906758  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:13.906770  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:13.906777  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:13.913293  471785 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0605 17:56:13.913316  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:13.913325  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:13 GMT
	I0605 17:56:13.913332  471785 round_trippers.go:580]     Audit-Id: 75a98cae-fd3d-43dd-8c85-469abe7c5af6
	I0605 17:56:13.913339  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:13.913345  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:13.913352  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:13.913361  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:13.913457  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:14.406557  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:14.406580  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:14.406591  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:14.406599  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:14.409265  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:14.409294  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:14.409304  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:14.409311  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:14.409318  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:14.409325  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:14.409332  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:14 GMT
	I0605 17:56:14.409343  471785 round_trippers.go:580]     Audit-Id: 870d81eb-13ad-4a15-b30c-db12354f4d8b
	I0605 17:56:14.409457  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:14.906551  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:14.906577  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:14.906588  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:14.906595  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:14.909311  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:14.909338  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:14.909348  471785 round_trippers.go:580]     Audit-Id: 9fa2685d-2e36-4cc7-bb91-7d9c846d5c2a
	I0605 17:56:14.909356  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:14.909362  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:14.909369  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:14.909376  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:14.909389  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:14 GMT
	I0605 17:56:14.909664  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:15.406705  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:15.406732  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:15.406744  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:15.406751  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:15.409359  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:15.409381  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:15.409390  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:15.409397  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:15 GMT
	I0605 17:56:15.409404  471785 round_trippers.go:580]     Audit-Id: 686c9ee3-8a57-4aaa-981b-c4288b08edcf
	I0605 17:56:15.409411  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:15.409417  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:15.409426  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:15.409601  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:15.906989  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:15.907016  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:15.907026  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:15.907035  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:15.909707  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:15.909730  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:15.909741  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:15.909748  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:15.909755  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:15.909762  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:15.909769  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:15 GMT
	I0605 17:56:15.909776  471785 round_trippers.go:580]     Audit-Id: a8aab6fb-9614-4271-b8a1-afd1ad88a4b3
	I0605 17:56:15.909907  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:15.910286  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:56:16.407097  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:16.407123  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:16.407134  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:16.407142  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:16.409681  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:16.409703  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:16.409712  471785 round_trippers.go:580]     Audit-Id: 4f864a55-3a99-4ce3-af3c-253c3a84743c
	I0605 17:56:16.409719  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:16.409726  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:16.409732  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:16.409750  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:16.409760  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:16 GMT
	I0605 17:56:16.409859  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:16.907052  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:16.907078  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:16.907090  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:16.907097  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:16.909625  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:16.909651  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:16.909661  471785 round_trippers.go:580]     Audit-Id: 065e8da9-ba09-4867-93a5-3b9637ee36d3
	I0605 17:56:16.909669  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:16.909676  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:16.909683  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:16.909690  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:16.909697  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:16 GMT
	I0605 17:56:16.910018  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:17.406634  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:17.406658  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:17.406670  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:17.406678  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:17.409197  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:17.409218  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:17.409227  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:17.409234  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:17 GMT
	I0605 17:56:17.409241  471785 round_trippers.go:580]     Audit-Id: 6701b713-a5da-4593-a496-43338fb61457
	I0605 17:56:17.409247  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:17.409254  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:17.409260  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:17.409355  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:17.906383  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:17.906411  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:17.906422  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:17.906430  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:17.909137  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:17.909167  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:17.909177  471785 round_trippers.go:580]     Audit-Id: 38645cfd-0d04-4f57-b6ab-6e424c7eb89e
	I0605 17:56:17.909184  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:17.909191  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:17.909198  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:17.909205  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:17.909213  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:17 GMT
	I0605 17:56:17.909314  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:18.406355  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:18.406379  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:18.406390  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:18.406398  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:18.408950  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:18.408978  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:18.408988  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:18.408995  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:18.409002  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:18.409009  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:18 GMT
	I0605 17:56:18.409016  471785 round_trippers.go:580]     Audit-Id: d48f4ab9-ab3d-43c1-9ece-ab3a2e64846a
	I0605 17:56:18.409023  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:18.409510  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:18.409904  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:56:18.906978  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:18.907027  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:18.907038  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:18.907046  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:18.909801  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:18.909827  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:18.909836  471785 round_trippers.go:580]     Audit-Id: 7b1d779b-ccc2-4f33-8a9b-3c01e7290da1
	I0605 17:56:18.909844  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:18.909850  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:18.909857  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:18.909864  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:18.909872  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:18 GMT
	I0605 17:56:18.909971  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:19.406625  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:19.406647  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:19.406657  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:19.406665  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:19.409324  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:19.409355  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:19.409366  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:19.409374  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:19 GMT
	I0605 17:56:19.409381  471785 round_trippers.go:580]     Audit-Id: fc246765-1d5d-4c38-8b92-334bc7cee773
	I0605 17:56:19.409388  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:19.409395  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:19.409418  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:19.409538  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:19.906778  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:19.906800  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:19.906811  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:19.906818  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:19.909353  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:19.909378  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:19.909388  471785 round_trippers.go:580]     Audit-Id: 9d6e4b3d-ca90-4c7a-8cac-fa3834e74f80
	I0605 17:56:19.909395  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:19.909402  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:19.909408  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:19.909416  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:19.909423  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:19 GMT
	I0605 17:56:19.909536  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:20.407163  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:20.407187  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:20.407197  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:20.407205  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:20.409855  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:20.409882  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:20.409892  471785 round_trippers.go:580]     Audit-Id: 245d24ac-5335-46e4-be00-4c3e896b62f4
	I0605 17:56:20.409899  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:20.409906  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:20.409913  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:20.409920  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:20.409927  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:20 GMT
	I0605 17:56:20.410037  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"488","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0605 17:56:20.410408  471785 node_ready.go:58] node "multinode-292850-m02" has status "Ready":"False"
	I0605 17:56:20.907306  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:20.907329  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:20.907339  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:20.907348  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:20.909980  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:20.910025  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:20.910034  471785 round_trippers.go:580]     Audit-Id: a51550a1-40f6-461a-a823-a0f2fcc0a610
	I0605 17:56:20.910042  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:20.910049  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:20.910056  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:20.910065  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:20.910072  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:20 GMT
	I0605 17:56:20.910166  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"509","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5344 chars]
	I0605 17:56:20.910577  471785 node_ready.go:49] node "multinode-292850-m02" has status "Ready":"True"
	I0605 17:56:20.910595  471785 node_ready.go:38] duration metric: took 30.50842023s waiting for node "multinode-292850-m02" to be "Ready" ...
	I0605 17:56:20.910605  471785 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
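The loop that just ended is the node_ready wait: roughly every 500ms minikube re-fetched the Node and inspected its Ready condition until it flipped to True (here after 30.5s). A minimal sketch of that polling pattern with client-go follows; the package layout, function name, and interval are illustrative assumptions, not minikube's actual code:

    // readiness.go -- illustrative sketch of the node_ready polling loop
    // logged above; not minikube's implementation.
    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady re-fetches the Node every 500ms until its Ready
    // condition reports True, or the context expires.
    func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil // node "Ready":"True"
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C: // retry, as in the GET loop logged above
            }
        }
    }

Each iteration of that sketch corresponds to one GET https://192.168.58.2:8443/api/v1/nodes/... request/response pair in the trace.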
	I0605 17:56:20.910671  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0605 17:56:20.910681  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:20.910690  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:20.910701  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:20.914325  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:56:20.914352  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:20.914361  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:20.914368  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:20.914378  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:20.914385  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:20.914392  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:20 GMT
	I0605 17:56:20.914400  471785 round_trippers.go:580]     Audit-Id: 77422154-3129-40ce-9d3b-0bbc8f287c20
	I0605 17:56:20.915037  471785 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"coredns-5d78c9869d-g9m8h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"de5aab07-b3ba-4a99-8384-9958e4f604b3","resourceVersion":"420","creationTimestamp":"2023-06-05T17:55:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a0725fd4-795f-4b40-80b6-04fae54f5939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0725fd4-795f-4b40-80b6-04fae54f5939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I0605 17:56:20.918093  471785 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-g9m8h" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:20.918185  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-g9m8h
	I0605 17:56:20.918196  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:20.918206  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:20.918213  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:20.920826  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:20.920855  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:20.920865  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:20.920873  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:20 GMT
	I0605 17:56:20.920880  471785 round_trippers.go:580]     Audit-Id: d4dec03c-552d-4981-974d-d8796d8a356d
	I0605 17:56:20.920888  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:20.920895  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:20.920901  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:20.921004  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-g9m8h","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"de5aab07-b3ba-4a99-8384-9958e4f604b3","resourceVersion":"420","creationTimestamp":"2023-06-05T17:55:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a0725fd4-795f-4b40-80b6-04fae54f5939","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a0725fd4-795f-4b40-80b6-04fae54f5939\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0605 17:56:20.921540  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:56:20.921555  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:20.921564  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:20.921574  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:20.924087  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:20.924111  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:20.924122  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:20 GMT
	I0605 17:56:20.924129  471785 round_trippers.go:580]     Audit-Id: 93b8fc6d-3a76-4d72-bb2e-ddda930766dc
	I0605 17:56:20.924136  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:20.924142  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:20.924149  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:20.924160  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:20.924278  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:56:20.924690  471785 pod_ready.go:92] pod "coredns-5d78c9869d-g9m8h" in "kube-system" namespace has status "Ready":"True"
	I0605 17:56:20.924709  471785 pod_ready.go:81] duration metric: took 6.585622ms waiting for pod "coredns-5d78c9869d-g9m8h" in "kube-system" namespace to be "Ready" ...
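Once the node is Ready, pod_ready applies the same condition test to each system-critical pod in turn, as it just did for coredns-5d78c9869d-g9m8h. A sketch of that per-pod check (the helper name is an assumption, not minikube's):

    // IsPodReady reports whether the Pod's Ready condition is True -- the
    // test applied to each pod in the pod_ready waits logged here.
    package readiness

    import corev1 "k8s.io/api/core/v1"

    func IsPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }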
	I0605 17:56:20.924732  471785 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:20.924833  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-292850
	I0605 17:56:20.924842  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:20.924851  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:20.924860  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:20.927140  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:20.927162  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:20.927171  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:20.927178  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:20.927184  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:20.927192  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:20.927199  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:20 GMT
	I0605 17:56:20.927209  471785 round_trippers.go:580]     Audit-Id: a0b36da9-1d62-460c-9fce-01e9541e97b4
	I0605 17:56:20.927293  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-292850","namespace":"kube-system","uid":"9851a436-29a1-4ee7-b3b0-ab3afbdeb909","resourceVersion":"390","creationTimestamp":"2023-06-05T17:54:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"5f348b4a3dbb4e3d988ba05637c6c0d9","kubernetes.io/config.mirror":"5f348b4a3dbb4e3d988ba05637c6c0d9","kubernetes.io/config.seen":"2023-06-05T17:54:47.979829524Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0605 17:56:20.927736  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:56:20.927751  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:20.927764  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:20.927774  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:20.929936  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:20.929970  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:20.929979  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:20.929985  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:20.929992  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:20.930003  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:20.930016  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:20 GMT
	I0605 17:56:20.930031  471785 round_trippers.go:580]     Audit-Id: 3746d13b-4c5f-4d2b-a5f4-735d880fd619
	I0605 17:56:20.930310  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:56:20.930702  471785 pod_ready.go:92] pod "etcd-multinode-292850" in "kube-system" namespace has status "Ready":"True"
	I0605 17:56:20.930720  471785 pod_ready.go:81] duration metric: took 5.967047ms waiting for pod "etcd-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:20.930737  471785 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:20.930805  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-292850
	I0605 17:56:20.930813  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:20.930821  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:20.930828  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:20.933250  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:20.933272  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:20.933280  471785 round_trippers.go:580]     Audit-Id: e7f9a36e-c0e8-45c5-9d7f-382f0f8001ed
	I0605 17:56:20.933287  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:20.933294  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:20.933301  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:20.933308  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:20.933317  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:20 GMT
	I0605 17:56:20.933465  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-292850","namespace":"kube-system","uid":"93831e67-92d5-43b4-9c66-5bce71b7550b","resourceVersion":"391","creationTimestamp":"2023-06-05T17:54:48Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"7df81ebc702fe430d81216befbf43af3","kubernetes.io/config.mirror":"7df81ebc702fe430d81216befbf43af3","kubernetes.io/config.seen":"2023-06-05T17:54:47.979831288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0605 17:56:20.933998  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:56:20.934014  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:20.934023  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:20.934031  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:20.936312  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:20.936335  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:20.936344  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:20.936351  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:20.936366  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:20.936374  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:20 GMT
	I0605 17:56:20.936383  471785 round_trippers.go:580]     Audit-Id: be8541f1-e23d-4e44-89ef-c2ba834c7329
	I0605 17:56:20.936394  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:20.936533  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:56:20.936917  471785 pod_ready.go:92] pod "kube-apiserver-multinode-292850" in "kube-system" namespace has status "Ready":"True"
	I0605 17:56:20.936933  471785 pod_ready.go:81] duration metric: took 6.182628ms waiting for pod "kube-apiserver-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:20.936944  471785 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:20.937003  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-292850
	I0605 17:56:20.937013  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:20.937021  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:20.937028  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:20.939291  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:20.939314  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:20.939322  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:20.939329  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:20 GMT
	I0605 17:56:20.939335  471785 round_trippers.go:580]     Audit-Id: 915e63d3-75d9-44db-a528-1aa257849a91
	I0605 17:56:20.939347  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:20.939357  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:20.939364  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:20.939491  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-292850","namespace":"kube-system","uid":"6c0b10fd-fb34-4ae9-9dbe-c7548b0bd11a","resourceVersion":"392","creationTimestamp":"2023-06-05T17:54:48Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5f305b36979486f8420656cc79a6f159","kubernetes.io/config.mirror":"5f305b36979486f8420656cc79a6f159","kubernetes.io/config.seen":"2023-06-05T17:54:47.979832568Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0605 17:56:20.940016  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:56:20.940028  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:20.940036  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:20.940046  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:20.942137  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:20.942156  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:20.942165  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:20.942173  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:20.942179  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:20.942189  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:20.942200  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:20 GMT
	I0605 17:56:20.942207  471785 round_trippers.go:580]     Audit-Id: 516fbf8a-d47c-476f-b879-2988b6c33fc4
	I0605 17:56:20.942430  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:56:20.942811  471785 pod_ready.go:92] pod "kube-controller-manager-multinode-292850" in "kube-system" namespace has status "Ready":"True"
	I0605 17:56:20.942829  471785 pod_ready.go:81] duration metric: took 5.874297ms waiting for pod "kube-controller-manager-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:20.942841  471785 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v8xlw" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:21.108262  471785 request.go:628] Waited for 165.351339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v8xlw
	I0605 17:56:21.108403  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v8xlw
	I0605 17:56:21.108416  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:21.108427  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:21.108435  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:21.111463  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:56:21.111492  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:21.111502  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:21.111509  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:21 GMT
	I0605 17:56:21.111516  471785 round_trippers.go:580]     Audit-Id: 04e0b999-0940-470f-80e5-444ea420d6d0
	I0605 17:56:21.111523  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:21.111530  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:21.111537  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:21.111657  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v8xlw","generateName":"kube-proxy-","namespace":"kube-system","uid":"b11f9e66-fb00-4b48-98cf-113fa1163e85","resourceVersion":"385","creationTimestamp":"2023-06-05T17:55:00Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8cfb31b2-4d2c-480e-9c1d-672453d426a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cfb31b2-4d2c-480e-9c1d-672453d426a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5508 chars]
	I0605 17:56:21.308015  471785 request.go:628] Waited for 195.761326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:56:21.308089  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:56:21.308102  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:21.308112  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:21.308121  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:21.311403  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:56:21.311428  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:21.311437  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:21.311471  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:21 GMT
	I0605 17:56:21.311485  471785 round_trippers.go:580]     Audit-Id: d2ffb36c-6b78-4d78-822c-008435690096
	I0605 17:56:21.311492  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:21.311505  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:21.311513  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:21.311656  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:56:21.312144  471785 pod_ready.go:92] pod "kube-proxy-v8xlw" in "kube-system" namespace has status "Ready":"True"
	I0605 17:56:21.312161  471785 pod_ready.go:81] duration metric: took 369.31102ms waiting for pod "kube-proxy-v8xlw" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:21.312173  471785 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zln7p" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:21.507443  471785 request.go:628] Waited for 195.202378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zln7p
	I0605 17:56:21.507552  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zln7p
	I0605 17:56:21.507566  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:21.507576  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:21.507597  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:21.510341  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:21.510364  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:21.510373  471785 round_trippers.go:580]     Audit-Id: fa7bb8f8-08b9-477c-8e7e-af9163f624ac
	I0605 17:56:21.510381  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:21.510410  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:21.510423  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:21.510430  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:21.510437  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:21 GMT
	I0605 17:56:21.510759  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zln7p","generateName":"kube-proxy-","namespace":"kube-system","uid":"0bf80548-0ab0-4a95-93d3-1ecdc48d9dc1","resourceVersion":"475","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8cfb31b2-4d2c-480e-9c1d-672453d426a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cfb31b2-4d2c-480e-9c1d-672453d426a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5516 chars]
	I0605 17:56:21.707546  471785 request.go:628] Waited for 196.26206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:21.707605  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850-m02
	I0605 17:56:21.707612  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:21.707622  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:21.707634  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:21.710303  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:21.710332  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:21.710342  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:21.710351  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:21 GMT
	I0605 17:56:21.710361  471785 round_trippers.go:580]     Audit-Id: 7e750128-a7cf-4684-b524-6e42916aecb2
	I0605 17:56:21.710374  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:21.710386  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:21.710393  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:21.710522  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850-m02","uid":"b7680c21-c6e0-4b75-b659-e553651f26b4","resourceVersion":"510","creationTimestamp":"2023-06-05T17:55:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:55:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5258 chars]
	I0605 17:56:21.710946  471785 pod_ready.go:92] pod "kube-proxy-zln7p" in "kube-system" namespace has status "Ready":"True"
	I0605 17:56:21.710970  471785 pod_ready.go:81] duration metric: took 398.790104ms waiting for pod "kube-proxy-zln7p" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:21.710983  471785 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:21.907365  471785 request.go:628] Waited for 196.29758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-292850
	I0605 17:56:21.907437  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-292850
	I0605 17:56:21.907450  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:21.907466  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:21.907474  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:21.910587  471785 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0605 17:56:21.910614  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:21.910625  471785 round_trippers.go:580]     Audit-Id: 9ecab7f9-34ad-4862-afab-bbc0dc5aff73
	I0605 17:56:21.910632  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:21.910646  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:21.910653  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:21.910660  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:21.910672  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:21 GMT
	I0605 17:56:21.910790  471785 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-292850","namespace":"kube-system","uid":"d3d6371e-e9b5-4e31-8395-5c78f8fd0b10","resourceVersion":"389","creationTimestamp":"2023-06-05T17:54:48Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a27bbca9df3e6fc5b378037826433d3a","kubernetes.io/config.mirror":"a27bbca9df3e6fc5b378037826433d3a","kubernetes.io/config.seen":"2023-06-05T17:54:47.979823764Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-05T17:54:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0605 17:56:22.107624  471785 request.go:628] Waited for 196.386359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:56:22.107738  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-292850
	I0605 17:56:22.107751  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:22.107762  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:22.107769  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:22.110375  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:22.110401  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:22.110411  471785 round_trippers.go:580]     Audit-Id: 680e84eb-71a6-4536-a33e-661845dc7b70
	I0605 17:56:22.110426  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:22.110434  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:22.110440  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:22.110447  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:22.110454  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:22 GMT
	I0605 17:56:22.110812  471785 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-05T17:54:44Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0605 17:56:22.111271  471785 pod_ready.go:92] pod "kube-scheduler-multinode-292850" in "kube-system" namespace has status "Ready":"True"
	I0605 17:56:22.111292  471785 pod_ready.go:81] duration metric: took 400.296161ms waiting for pod "kube-scheduler-multinode-292850" in "kube-system" namespace to be "Ready" ...
	I0605 17:56:22.111305  471785 pod_ready.go:38] duration metric: took 1.200686859s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0605 17:56:22.111322  471785 system_svc.go:44] waiting for kubelet service to be running ....
	I0605 17:56:22.111389  471785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:56:22.125481  471785 system_svc.go:56] duration metric: took 14.15087ms WaitForService to wait for kubelet.
	I0605 17:56:22.125517  471785 kubeadm.go:581] duration metric: took 31.752008945s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0605 17:56:22.125539  471785 node_conditions.go:102] verifying NodePressure condition ...
	I0605 17:56:22.307978  471785 request.go:628] Waited for 182.339726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0605 17:56:22.308075  471785 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0605 17:56:22.308086  471785 round_trippers.go:469] Request Headers:
	I0605 17:56:22.308096  471785 round_trippers.go:473]     Accept: application/json, */*
	I0605 17:56:22.308104  471785 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0605 17:56:22.310909  471785 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0605 17:56:22.310938  471785 round_trippers.go:577] Response Headers:
	I0605 17:56:22.310992  471785 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1733ddf6-968a-4a27-a3b0-e72e393e4f4f
	I0605 17:56:22.311007  471785 round_trippers.go:580]     Date: Mon, 05 Jun 2023 17:56:22 GMT
	I0605 17:56:22.311015  471785 round_trippers.go:580]     Audit-Id: fc1e0467-08a4-41b2-9eea-8b52ab25a2bb
	I0605 17:56:22.311028  471785 round_trippers.go:580]     Cache-Control: no-cache, private
	I0605 17:56:22.311047  471785 round_trippers.go:580]     Content-Type: application/json
	I0605 17:56:22.311062  471785 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e2d1966b-30ab-44bc-ab31-0bd002eeef16
	I0605 17:56:22.311253  471785 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"511"},"items":[{"metadata":{"name":"multinode-292850","uid":"e67f260a-2ec4-4bcb-97b6-88621da3b160","resourceVersion":"402","creationTimestamp":"2023-06-05T17:54:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-292850","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b059332e570e1d712234ec4f823aa77854e7956d","minikube.k8s.io/name":"multinode-292850","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_05T17_54_49_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12332 chars]
	I0605 17:56:22.311974  471785 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0605 17:56:22.311995  471785 node_conditions.go:123] node cpu capacity is 2
	I0605 17:56:22.312006  471785 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0605 17:56:22.312011  471785 node_conditions.go:123] node cpu capacity is 2
	I0605 17:56:22.312027  471785 node_conditions.go:105] duration metric: took 186.470879ms to run NodePressure ...
	I0605 17:56:22.312043  471785 start.go:228] waiting for startup goroutines ...
	I0605 17:56:22.312068  471785 start.go:242] writing updated cluster config ...
	I0605 17:56:22.312406  471785 ssh_runner.go:195] Run: rm -f paused
	I0605 17:56:22.371867  471785 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0605 17:56:22.375874  471785 out.go:177] * Done! kubectl is now configured to use "multinode-292850" cluster and "default" namespace by default
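	
	The tail of this log is minikube's post-start verification: pod_ready.go polls each system pod's Ready condition through the API server, system_svc.go checks the kubelet unit with `systemctl is-active`, and node_conditions.go inspects the node pressure conditions. The "Waited for ... due to client-side throttling" lines come from client-go's default rate limiter (QPS 5 / burst 10), not from server-side API Priority and Fairness. Below is a minimal client-go sketch of the same readiness poll; the kubeconfig path, namespace, and pod name are illustrative, not minikube's actual code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative: load the default kubeconfig (~/.kube/config), which
        // "minikube start" points at the new cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go defaults to QPS 5 / Burst 10; raising them avoids the
        // "client-side throttling" waits visible in the log above.
        cfg.QPS = 50
        cfg.Burst = 100

        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll the same condition pod_ready.go checks: status.conditions[Ready] == True.
        ns, name := "kube-system", "kube-apiserver-multinode-292850"
        deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget in the log
        for time.Now().Before(deadline) {
            pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Printf("pod %q is Ready\n", name)
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Printf("timed out waiting for %q\n", name)
    }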
	
	* 
	* ==> CRI-O <==
	* Jun 05 17:55:33 multinode-292850 crio[896]: time="2023-06-05 17:55:33.233305264Z" level=info msg="Starting container: c50195d9f54b012b850c935f9262b3cf774e815a51a295047871767bf49aacd6" id=217e8062-816b-4591-b2cd-c95ccc6a1362 name=/runtime.v1.RuntimeService/StartContainer
	Jun 05 17:55:33 multinode-292850 crio[896]: time="2023-06-05 17:55:33.253835308Z" level=info msg="Created container 0826a066a2d8738f205372e2fac5ab3f0279067b61173d71b74f60b0172efa6a: kube-system/coredns-5d78c9869d-g9m8h/coredns" id=4a868e68-16c3-4113-9726-f25840253938 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 05 17:55:33 multinode-292850 crio[896]: time="2023-06-05 17:55:33.254387685Z" level=info msg="Started container" PID=1931 containerID=c50195d9f54b012b850c935f9262b3cf774e815a51a295047871767bf49aacd6 description=kube-system/storage-provisioner/storage-provisioner id=217e8062-816b-4591-b2cd-c95ccc6a1362 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a0c91418929478973b3e45e4efe594922451603a63217f61b9c27b4a1fa33ca8
	Jun 05 17:55:33 multinode-292850 crio[896]: time="2023-06-05 17:55:33.254698741Z" level=info msg="Starting container: 0826a066a2d8738f205372e2fac5ab3f0279067b61173d71b74f60b0172efa6a" id=f6e59c2b-77d5-4c67-b2bb-e4b48ee50db5 name=/runtime.v1.RuntimeService/StartContainer
	Jun 05 17:55:33 multinode-292850 crio[896]: time="2023-06-05 17:55:33.276551351Z" level=info msg="Started container" PID=1937 containerID=0826a066a2d8738f205372e2fac5ab3f0279067b61173d71b74f60b0172efa6a description=kube-system/coredns-5d78c9869d-g9m8h/coredns id=f6e59c2b-77d5-4c67-b2bb-e4b48ee50db5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1c7b0bb5ae45b2fd10d3b55af587954411d915af36bc1f6a1ae4b47f9b24221
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.133665083Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-mtn99/POD" id=02edaddd-a61b-4267-b259-eb21d6418bcd name=/runtime.v1.RuntimeService/RunPodSandbox
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.133729100Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.149952426Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-mtn99 Namespace:default ID:c551e00ad59f40a92f0df308dea581e2a14bfcc9f57a1ac2a4ed127fe37c0373 UID:274d5667-c017-4cf6-be38-ccb2e7035c8b NetNS:/var/run/netns/0b442dd2-6466-467b-9dc9-37f0272f0a31 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.149996848Z" level=info msg="Adding pod default_busybox-67b7f59bb-mtn99 to CNI network \"kindnet\" (type=ptp)"
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.162728841Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-mtn99 Namespace:default ID:c551e00ad59f40a92f0df308dea581e2a14bfcc9f57a1ac2a4ed127fe37c0373 UID:274d5667-c017-4cf6-be38-ccb2e7035c8b NetNS:/var/run/netns/0b442dd2-6466-467b-9dc9-37f0272f0a31 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.162917698Z" level=info msg="Checking pod default_busybox-67b7f59bb-mtn99 for CNI network kindnet (type=ptp)"
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.190195399Z" level=info msg="Ran pod sandbox c551e00ad59f40a92f0df308dea581e2a14bfcc9f57a1ac2a4ed127fe37c0373 with infra container: default/busybox-67b7f59bb-mtn99/POD" id=02edaddd-a61b-4267-b259-eb21d6418bcd name=/runtime.v1.RuntimeService/RunPodSandbox
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.191318523Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=4ed6f7e0-ed2f-46ea-a75c-6372895f202c name=/runtime.v1.ImageService/ImageStatus
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.191560402Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=4ed6f7e0-ed2f-46ea-a75c-6372895f202c name=/runtime.v1.ImageService/ImageStatus
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.192497409Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=df73a5ec-c9cd-43c7-9064-50fed839331e name=/runtime.v1.ImageService/PullImage
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.193856259Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jun 05 17:56:25 multinode-292850 crio[896]: time="2023-06-05 17:56:25.854071625Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jun 05 17:56:27 multinode-292850 crio[896]: time="2023-06-05 17:56:27.106477930Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=df73a5ec-c9cd-43c7-9064-50fed839331e name=/runtime.v1.ImageService/PullImage
	Jun 05 17:56:27 multinode-292850 crio[896]: time="2023-06-05 17:56:27.107760775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=8afb430d-0e5e-49b7-8404-bc5e4fd538ff name=/runtime.v1.ImageService/ImageStatus
	Jun 05 17:56:27 multinode-292850 crio[896]: time="2023-06-05 17:56:27.108929807Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8afb430d-0e5e-49b7-8404-bc5e4fd538ff name=/runtime.v1.ImageService/ImageStatus
	Jun 05 17:56:27 multinode-292850 crio[896]: time="2023-06-05 17:56:27.110660618Z" level=info msg="Creating container: default/busybox-67b7f59bb-mtn99/busybox" id=2dcc42d0-0af7-4825-8f40-bd059a396f9a name=/runtime.v1.RuntimeService/CreateContainer
	Jun 05 17:56:27 multinode-292850 crio[896]: time="2023-06-05 17:56:27.110804651Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 05 17:56:27 multinode-292850 crio[896]: time="2023-06-05 17:56:27.198249806Z" level=info msg="Created container bc97257b76baa43d83d1cef529563192d27422b7c6fe8881da307bc00f11d121: default/busybox-67b7f59bb-mtn99/busybox" id=2dcc42d0-0af7-4825-8f40-bd059a396f9a name=/runtime.v1.RuntimeService/CreateContainer
	Jun 05 17:56:27 multinode-292850 crio[896]: time="2023-06-05 17:56:27.199155749Z" level=info msg="Starting container: bc97257b76baa43d83d1cef529563192d27422b7c6fe8881da307bc00f11d121" id=acb2000d-bcb9-4e2f-b496-e032f5be2f14 name=/runtime.v1.RuntimeService/StartContainer
	Jun 05 17:56:27 multinode-292850 crio[896]: time="2023-06-05 17:56:27.211121966Z" level=info msg="Started container" PID=2081 containerID=bc97257b76baa43d83d1cef529563192d27422b7c6fe8881da307bc00f11d121 description=default/busybox-67b7f59bb-mtn99/busybox id=acb2000d-bcb9-4e2f-b496-e032f5be2f14 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c551e00ad59f40a92f0df308dea581e2a14bfcc9f57a1ac2a4ed127fe37c0373
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bc97257b76baa       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   4 seconds ago        Running             busybox                   0                   c551e00ad59f4       busybox-67b7f59bb-mtn99
	0826a066a2d87       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      58 seconds ago       Running             coredns                   0                   f1c7b0bb5ae45       coredns-5d78c9869d-g9m8h
	c50195d9f54b0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      58 seconds ago       Running             storage-provisioner       0                   a0c9141892947       storage-provisioner
	7929e531d48b6       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                      About a minute ago   Running             kindnet-cni               0                   87e756fcf87a7       kindnet-wm5x2
	04e908b1963c4       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0                                      About a minute ago   Running             kube-proxy                0                   bb1a63ac3ee08       kube-proxy-v8xlw
	7d205021caa0d       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840                                      About a minute ago   Running             kube-scheduler            0                   2441950d62925       kube-scheduler-multinode-292850
	e433c0dda80c2       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4                                      About a minute ago   Running             kube-controller-manager   0                   0de3272fded99       kube-controller-manager-multinode-292850
	ac15399dd51b2       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae                                      About a minute ago   Running             kube-apiserver            0                   6830f59961aac       kube-apiserver-multinode-292850
	d320e868d3481       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                      About a minute ago   Running             etcd                      0                   eb0b64a836104       etcd-multinode-292850
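	
	This table is assembled from CRI-O over the CRI socket named in the node annotations (`sudo crictl ps` on the node renders the same view). A hedged Go sketch of that query using the k8s.io/cri-api gRPC client follows; it assumes grpc-go's unix-scheme dialing and keeps error handling minimal.

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Socket path from the node annotation kubeadm.alpha.kubernetes.io/cri-socket.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // c.State prints as CONTAINER_RUNNING, CONTAINER_EXITED, etc.
            fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
        }
    }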
	
	* 
	* ==> coredns [0826a066a2d8738f205372e2fac5ab3f0279067b61173d71b74f60b0172efa6a] <==
	* [INFO] 10.244.1.2:36835 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099495s
	[INFO] 10.244.0.3:49008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100283s
	[INFO] 10.244.0.3:50312 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00105256s
	[INFO] 10.244.0.3:48385 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079425s
	[INFO] 10.244.0.3:46241 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000043758s
	[INFO] 10.244.0.3:53318 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000856138s
	[INFO] 10.244.0.3:34081 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071327s
	[INFO] 10.244.0.3:39665 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058929s
	[INFO] 10.244.0.3:35271 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005449s
	[INFO] 10.244.1.2:53664 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127171s
	[INFO] 10.244.1.2:35698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112114s
	[INFO] 10.244.1.2:48223 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007616s
	[INFO] 10.244.1.2:60825 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078031s
	[INFO] 10.244.0.3:52208 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106986s
	[INFO] 10.244.0.3:38126 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087459s
	[INFO] 10.244.0.3:33292 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082429s
	[INFO] 10.244.0.3:35656 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100988s
	[INFO] 10.244.1.2:46501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103992s
	[INFO] 10.244.1.2:60775 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133439s
	[INFO] 10.244.1.2:36005 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120476s
	[INFO] 10.244.1.2:49890 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000133456s
	[INFO] 10.244.0.3:55327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084612s
	[INFO] 10.244.0.3:57375 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081887s
	[INFO] 10.244.0.3:39115 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005358s
	[INFO] 10.244.0.3:39326 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000047679s
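	
	These are ordinary cluster-DNS lookups: 10.96.0.10 (the PTR names read backwards) is the cluster DNS Service, and the NXDOMAIN answers are the resolver walking a pod's search domains (e.g. kubernetes.default.default.svc.cluster.local) before reaching the FQDN. One such lookup can be reproduced from inside any pod; a minimal Go sketch, assuming it runs in-cluster where /etc/resolv.conf points at the cluster DNS:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Inside a pod, /etc/resolv.conf names the cluster DNS service (10.96.0.10 here),
        // so the FQDN resolves without walking any search domains.
        addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs) // typically the kubernetes Service ClusterIP, 10.96.0.1 in this cluster
    }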
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-292850
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-292850
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b059332e570e1d712234ec4f823aa77854e7956d
	                    minikube.k8s.io/name=multinode-292850
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_05T17_54_49_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Jun 2023 17:54:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-292850
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Jun 2023 17:56:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Jun 2023 17:55:32 +0000   Mon, 05 Jun 2023 17:54:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Jun 2023 17:55:32 +0000   Mon, 05 Jun 2023 17:54:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Jun 2023 17:55:32 +0000   Mon, 05 Jun 2023 17:54:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Jun 2023 17:55:32 +0000   Mon, 05 Jun 2023 17:55:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-292850
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 56bf43e606e14a6ba2cf63d8871448f6
	  System UUID:                404bc4ac-68c5-446c-92a1-50b7cf2e53ec
	  Boot ID:                    da2c815d-c926-431d-a79c-25e8afa61b1d
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-mtn99                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5d78c9869d-g9m8h                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     91s
	  kube-system                 etcd-multinode-292850                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         104s
	  kube-system                 kindnet-wm5x2                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-multinode-292850             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-multinode-292850    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-v8xlw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-multinode-292850             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 89s   kube-proxy       
	  Normal  Starting                 105s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s  kubelet          Node multinode-292850 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s  kubelet          Node multinode-292850 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s  kubelet          Node multinode-292850 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           92s   node-controller  Node multinode-292850 event: Registered Node multinode-292850 in Controller
	  Normal  NodeReady                60s   kubelet          Node multinode-292850 status is now: NodeReady
	
	
	Name:               multinode-292850-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-292850-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Jun 2023 17:55:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-292850-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Jun 2023 17:56:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Jun 2023 17:56:20 +0000   Mon, 05 Jun 2023 17:55:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Jun 2023 17:56:20 +0000   Mon, 05 Jun 2023 17:55:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Jun 2023 17:56:20 +0000   Mon, 05 Jun 2023 17:55:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Jun 2023 17:56:20 +0000   Mon, 05 Jun 2023 17:56:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-292850-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 96a48a2c01554820be4cdafd6d11b4fc
	  System UUID:                8f987937-6993-46e1-9689-bb089e044842
	  Boot ID:                    da2c815d-c926-431d-a79c-25e8afa61b1d
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-8g86r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-p8mnt              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      43s
	  kube-system                 kube-proxy-zln7p           0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  43s (x5 over 45s)  kubelet          Node multinode-292850-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x5 over 45s)  kubelet          Node multinode-292850-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x5 over 45s)  kubelet          Node multinode-292850-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node multinode-292850-m02 event: Registered Node multinode-292850-m02 in Controller
	  Normal  NodeReady                12s                kubelet          Node multinode-292850-m02 status is now: NodeReady
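	
	The percentages in both nodes' Allocated resources tables are simply requests and limits relative to the node's allocatable capacity: on the control-plane node, 850m of summed CPU requests against 2 allocatable CPUs gives the 42% shown. A quick check of that arithmetic (plain Go, no cluster needed; the literals are copied from the tables above):

    package main

    import "fmt"

    func main() {
        // Allocatable from the capacity block above: cpu 2 => 2000 millicores.
        allocatableMilliCPU := int64(2000)
        // Sum of the CPU Requests column: 100+100+100+250+200+100 = 850m.
        requestedMilliCPU := int64(850)
        fmt.Printf("cpu  %dm (%d%%)\n",
            requestedMilliCPU, requestedMilliCPU*100/allocatableMilliCPU)
        // Output: cpu  850m (42%) -- integer division reproduces kubectl's 42%.
    }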
	
	* 
	* ==> dmesg <==
	* [  +0.001064] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000961] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000e03af87c
	[  +0.001075] FS-Cache: N-key=[8] 'd1d1c90000000000'
	[  +0.004539] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000982] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=00000000e785b4d1
	[  +0.001078] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000723] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000005f019a4a
	[  +0.001044] FS-Cache: N-key=[8] 'd1d1c90000000000'
	[  +3.062644] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=0000000061ba42e8
	[  +0.001124] FS-Cache: O-key=[8] 'd0d1c90000000000'
	[  +0.000715] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000e03af87c
	[  +0.001040] FS-Cache: N-key=[8] 'd0d1c90000000000'
	[  +0.324591] FS-Cache: Duplicate cookie detected
	[  +0.000707] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=000000004b485b91
	[  +0.001042] FS-Cache: O-key=[8] 'd6d1c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000995] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000003ad92423
	[  +0.001057] FS-Cache: N-key=[8] 'd6d1c90000000000'
	
	* 
	* ==> etcd [d320e868d3481b2eda99a05c5df705e891e065f294c8b3d3be24d1037797ac63] <==
	* {"level":"info","ts":"2023-06-05T17:54:40.829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-06-05T17:54:40.829Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-06-05T17:54:40.855Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-05T17:54:40.871Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-05T17:54:40.870Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-06-05T17:54:40.871Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-06-05T17:54:40.871Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-05T17:54:41.295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-05T17:54:41.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-05T17:54:41.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-06-05T17:54:41.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-06-05T17:54:41.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-06-05T17:54:41.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-06-05T17:54:41.296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-06-05T17:54:41.300Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:54:41.306Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:54:41.306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:54:41.306Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T17:54:41.306Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-292850 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-05T17:54:41.306Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T17:54:41.306Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-05T17:54:41.306Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-05T17:54:41.306Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T17:54:41.307Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-06-05T17:54:41.307Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  17:56:32 up  2:38,  0 users,  load average: 0.91, 1.82, 2.01
	Linux multinode-292850 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [7929e531d48b66fe09010c7544d54862baacbba72b77865ceabd96d94f3556cc] <==
	* I0605 17:55:02.131620       1 main.go:116] setting mtu 1500 for CNI 
	I0605 17:55:02.131633       1 main.go:146] kindnetd IP family: "ipv4"
	I0605 17:55:02.131647       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0605 17:55:32.456438       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0605 17:55:32.471834       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0605 17:55:32.471869       1 main.go:227] handling current node
	I0605 17:55:42.487839       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0605 17:55:42.487990       1 main.go:227] handling current node
	I0605 17:55:52.500016       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0605 17:55:52.500046       1 main.go:227] handling current node
	I0605 17:55:52.500058       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0605 17:55:52.500064       1 main.go:250] Node multinode-292850-m02 has CIDR [10.244.1.0/24] 
	I0605 17:55:52.500210       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0605 17:56:02.505356       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0605 17:56:02.505392       1 main.go:227] handling current node
	I0605 17:56:02.505403       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0605 17:56:02.505409       1 main.go:250] Node multinode-292850-m02 has CIDR [10.244.1.0/24] 
	I0605 17:56:12.513812       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0605 17:56:12.513843       1 main.go:227] handling current node
	I0605 17:56:12.513854       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0605 17:56:12.513860       1 main.go:250] Node multinode-292850-m02 has CIDR [10.244.1.0/24] 
	I0605 17:56:22.525832       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0605 17:56:22.525860       1 main.go:227] handling current node
	I0605 17:56:22.525871       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0605 17:56:22.525878       1 main.go:250] Node multinode-292850-m02 has CIDR [10.244.1.0/24] 
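Two things stand out in the kindnet log: the first node list at 17:55:32 only succeeded after an i/o timeout against 10.96.0.1:443 (the apiserver ClusterIP was not yet reachable), and at 17:55:52 the daemon installed the route that makes multinode-292850-m02's pod CIDR reachable from this node. A quick check that the route landed (addresses taken from the routes.go line above; the probe address inside 10.244.1.0/24 is arbitrary):

  # Ask the kernel how it would reach an address in the m02 pod CIDR;
  # a healthy setup resolves it via 192.168.58.3.
  $ out/minikube-linux-arm64 ssh -p multinode-292850 -- ip route get 10.244.1.10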
	
	* 
	* ==> kube-apiserver [ac15399dd51b259210446c46102e21066721c47497e13148b4b10a3c37058b3d] <==
	* I0605 17:54:44.949126       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0605 17:54:44.949162       1 cache.go:39] Caches are synced for autoregister controller
	I0605 17:54:44.949362       1 shared_informer.go:318] Caches are synced for configmaps
	I0605 17:54:44.949511       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0605 17:54:44.949647       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0605 17:54:44.949675       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0605 17:54:44.949680       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0605 17:54:45.030856       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0605 17:54:45.193065       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0605 17:54:45.287270       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0605 17:54:45.715550       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0605 17:54:45.724537       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0605 17:54:45.724654       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0605 17:54:46.305726       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0605 17:54:46.353939       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0605 17:54:46.425588       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0605 17:54:46.433779       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0605 17:54:46.434813       1 controller.go:624] quota admission added evaluator for: endpoints
	I0605 17:54:46.439339       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0605 17:54:46.981402       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0605 17:54:47.886238       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0605 17:54:47.906294       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0605 17:54:47.919242       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0605 17:55:00.867174       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0605 17:55:00.913428       1 controller.go:624] quota admission added evaluator for: replicasets.apps
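The apiserver log is a routine control-plane bring-up: caches sync, bootstrap RBAC and priority classes are created, and the two "allocated clusterIPs" lines hand out 10.96.0.1 (default/kubernetes) and 10.96.0.10 (kube-system/kube-dns). A one-liner to confirm both services ended up where the log says:

  $ kubectl --context multinode-292850 get svc -A
  # expect default/kubernetes on 10.96.0.1 and kube-system/kube-dns on 10.96.0.10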
	
	* 
	* ==> kube-controller-manager [e433c0dda80c2a0555df7ec3a1ba2e1f8d737001ae55ba1b62156d6953fc2cfa] <==
	* I0605 17:55:00.901034       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-v8xlw"
	I0605 17:55:00.910638       1 shared_informer.go:318] Caches are synced for attach detach
	I0605 17:55:00.916902       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wm5x2"
	I0605 17:55:00.941487       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0605 17:55:00.948117       1 shared_informer.go:318] Caches are synced for resource quota
	I0605 17:55:00.977431       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0605 17:55:01.021201       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-ztqhc"
	I0605 17:55:01.030202       1 shared_informer.go:318] Caches are synced for resource quota
	I0605 17:55:01.057999       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-g9m8h"
	I0605 17:55:01.377846       1 shared_informer.go:318] Caches are synced for garbage collector
	I0605 17:55:01.411764       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0605 17:55:01.434081       1 shared_informer.go:318] Caches are synced for garbage collector
	I0605 17:55:01.434792       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0605 17:55:01.456345       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-ztqhc"
	I0605 17:55:35.848436       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0605 17:55:49.371399       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-292850-m02\" does not exist"
	I0605 17:55:49.386057       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-292850-m02" podCIDRs=[10.244.1.0/24]
	I0605 17:55:49.398015       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-p8mnt"
	I0605 17:55:49.402792       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zln7p"
	I0605 17:55:50.851267       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-292850-m02"
	I0605 17:55:50.851388       1 event.go:307] "Event occurred" object="multinode-292850-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-292850-m02 event: Registered Node multinode-292850-m02 in Controller"
	W0605 17:56:20.855029       1 topologycache.go:232] Can't get CPU or zone information for multinode-292850-m02 node
	I0605 17:56:23.272695       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0605 17:56:23.295278       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-8g86r"
	I0605 17:56:23.307960       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-mtn99"
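In the controller-manager log, m02 joins at 17:55:49 (PodCIDR 10.244.1.0/24 assigned, kindnet and kube-proxy daemonset pods created) and the busybox deployment used by the ping test scales to 2 at 17:56:23. The topologycache warning at 17:56:20 is expected on minikube: the nodes carry no zone labels, so topology-aware endpoint hints are simply skipped. A way to confirm that (no topology.kubernetes.io/zone label should appear):

  $ kubectl --context multinode-292850 get node multinode-292850-m02 --show-labels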
	
	* 
	* ==> kube-proxy [04e908b1963c491b2b9c33ab5ec3cb3889b22eaf823cac0fa148287fea47ffd3] <==
	* I0605 17:55:02.476328       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0605 17:55:02.476454       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0605 17:55:02.476475       1 server_others.go:551] "Using iptables proxy"
	I0605 17:55:02.535508       1 server_others.go:190] "Using iptables Proxier"
	I0605 17:55:02.535634       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0605 17:55:02.535674       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0605 17:55:02.535724       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0605 17:55:02.535829       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0605 17:55:02.536742       1 server.go:657] "Version info" version="v1.27.2"
	I0605 17:55:02.536844       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0605 17:55:02.541304       1 config.go:315] "Starting node config controller"
	I0605 17:55:02.541329       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0605 17:55:02.541688       1 config.go:97] "Starting endpoint slice config controller"
	I0605 17:55:02.541706       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0605 17:55:02.541785       1 config.go:188] "Starting service config controller"
	I0605 17:55:02.541799       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0605 17:55:02.641850       1 shared_informer.go:318] Caches are synced for service config
	I0605 17:55:02.641916       1 shared_informer.go:318] Caches are synced for node config
	I0605 17:55:02.641928       1 shared_informer.go:318] Caches are synced for endpoint slice config
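kube-proxy came up in iptables mode and, per the proxier.go line, set route_localnet=1 so NodePorts answer on localhost. That sysctl is directly checkable on the node, using the same ssh invocation style the report itself uses:

  $ out/minikube-linux-arm64 ssh -p multinode-292850 -- sysctl net.ipv4.conf.all.route_localnet
  # expect: net.ipv4.conf.all.route_localnet = 1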
	
	* 
	* ==> kube-scheduler [7d205021caa0d7c24f850429e332d097e9920c04e9497a783c6bca37f4a4419a] <==
	* W0605 17:54:45.795986       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0605 17:54:45.796005       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0605 17:54:45.796069       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0605 17:54:45.796090       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0605 17:54:45.796153       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0605 17:54:45.796170       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0605 17:54:45.796202       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0605 17:54:45.796252       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0605 17:54:45.796258       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0605 17:54:45.796346       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0605 17:54:45.796348       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0605 17:54:45.796420       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0605 17:54:45.796450       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0605 17:54:45.796426       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0605 17:54:45.796524       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0605 17:54:45.796540       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0605 17:54:45.796222       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0605 17:54:45.796556       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0605 17:54:45.796306       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0605 17:54:45.796570       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0605 17:54:45.796632       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0605 17:54:45.796684       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0605 17:54:45.796934       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0605 17:54:45.797001       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0605 17:54:47.086507       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
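The burst of "forbidden" list/watch failures at 17:54:45 is the usual scheduler/apiserver startup race: the scheduler starts before the apiserver has finished creating the bootstrap RBAC for system:kube-scheduler, and the errors stop once the client-ca informer syncs at 17:54:47. If such errors persisted past startup, a spot-check of the RBAC grant (a sketch, run as the admin context so impersonation is allowed) would be:

  $ kubectl --context multinode-292850 auth can-i list pods \
      --as=system:kube-scheduler --all-namespaces
  # a healthy cluster prints "yes"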
	
	* 
	* ==> kubelet <==
	* Jun 05 17:55:01 multinode-292850 kubelet[1385]: I0605 17:55:01.089000    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b11f9e66-fb00-4b48-98cf-113fa1163e85-kube-proxy\") pod \"kube-proxy-v8xlw\" (UID: \"b11f9e66-fb00-4b48-98cf-113fa1163e85\") " pod="kube-system/kube-proxy-v8xlw"
	Jun 05 17:55:01 multinode-292850 kubelet[1385]: I0605 17:55:01.089029    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b11f9e66-fb00-4b48-98cf-113fa1163e85-xtables-lock\") pod \"kube-proxy-v8xlw\" (UID: \"b11f9e66-fb00-4b48-98cf-113fa1163e85\") " pod="kube-system/kube-proxy-v8xlw"
	Jun 05 17:55:01 multinode-292850 kubelet[1385]: W0605 17:55:01.554016    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47/crio/crio-bb1a63ac3ee08776b96834fc200577eb438f7cc878d8de073e2d42b233b85ced WatchSource:0}: Error finding container bb1a63ac3ee08776b96834fc200577eb438f7cc878d8de073e2d42b233b85ced: Status 404 returned error can't find the container with id bb1a63ac3ee08776b96834fc200577eb438f7cc878d8de073e2d42b233b85ced
	Jun 05 17:55:01 multinode-292850 kubelet[1385]: W0605 17:55:01.589598    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47/crio/crio-87e756fcf87a77cb26bf2174aab578e3cbc3d5b4fe5e78e5dcfe1225f0132139 WatchSource:0}: Error finding container 87e756fcf87a77cb26bf2174aab578e3cbc3d5b4fe5e78e5dcfe1225f0132139: Status 404 returned error can't find the container with id 87e756fcf87a77cb26bf2174aab578e3cbc3d5b4fe5e78e5dcfe1225f0132139
	Jun 05 17:55:03 multinode-292850 kubelet[1385]: I0605 17:55:03.189416    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-wm5x2" podStartSLOduration=3.18937194 podCreationTimestamp="2023-06-05 17:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-05 17:55:02.227524348 +0000 UTC m=+14.392411877" watchObservedRunningTime="2023-06-05 17:55:03.18937194 +0000 UTC m=+15.354259436"
	Jun 05 17:55:08 multinode-292850 kubelet[1385]: I0605 17:55:08.017901    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-v8xlw" podStartSLOduration=8.01785635 podCreationTimestamp="2023-06-05 17:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-05 17:55:03.189756062 +0000 UTC m=+15.354643911" watchObservedRunningTime="2023-06-05 17:55:08.01785635 +0000 UTC m=+20.182743854"
	Jun 05 17:55:32 multinode-292850 kubelet[1385]: I0605 17:55:32.675671    1385 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jun 05 17:55:32 multinode-292850 kubelet[1385]: I0605 17:55:32.760447    1385 topology_manager.go:212] "Topology Admit Handler"
	Jun 05 17:55:32 multinode-292850 kubelet[1385]: I0605 17:55:32.763492    1385 topology_manager.go:212] "Topology Admit Handler"
	Jun 05 17:55:32 multinode-292850 kubelet[1385]: I0605 17:55:32.822082    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de5aab07-b3ba-4a99-8384-9958e4f604b3-config-volume\") pod \"coredns-5d78c9869d-g9m8h\" (UID: \"de5aab07-b3ba-4a99-8384-9958e4f604b3\") " pod="kube-system/coredns-5d78c9869d-g9m8h"
	Jun 05 17:55:32 multinode-292850 kubelet[1385]: I0605 17:55:32.822136    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69gtr\" (UniqueName: \"kubernetes.io/projected/de5aab07-b3ba-4a99-8384-9958e4f604b3-kube-api-access-69gtr\") pod \"coredns-5d78c9869d-g9m8h\" (UID: \"de5aab07-b3ba-4a99-8384-9958e4f604b3\") " pod="kube-system/coredns-5d78c9869d-g9m8h"
	Jun 05 17:55:32 multinode-292850 kubelet[1385]: I0605 17:55:32.822169    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g68rv\" (UniqueName: \"kubernetes.io/projected/4675df19-daf8-44d2-992e-6f6be51be7da-kube-api-access-g68rv\") pod \"storage-provisioner\" (UID: \"4675df19-daf8-44d2-992e-6f6be51be7da\") " pod="kube-system/storage-provisioner"
	Jun 05 17:55:32 multinode-292850 kubelet[1385]: I0605 17:55:32.822194    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4675df19-daf8-44d2-992e-6f6be51be7da-tmp\") pod \"storage-provisioner\" (UID: \"4675df19-daf8-44d2-992e-6f6be51be7da\") " pod="kube-system/storage-provisioner"
	Jun 05 17:55:33 multinode-292850 kubelet[1385]: W0605 17:55:33.102350    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47/crio/crio-f1c7b0bb5ae45b2fd10d3b55af587954411d915af36bc1f6a1ae4b47f9b24221 WatchSource:0}: Error finding container f1c7b0bb5ae45b2fd10d3b55af587954411d915af36bc1f6a1ae4b47f9b24221: Status 404 returned error can't find the container with id f1c7b0bb5ae45b2fd10d3b55af587954411d915af36bc1f6a1ae4b47f9b24221
	Jun 05 17:55:33 multinode-292850 kubelet[1385]: W0605 17:55:33.102711    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47/crio/crio-a0c91418929478973b3e45e4efe594922451603a63217f61b9c27b4a1fa33ca8 WatchSource:0}: Error finding container a0c91418929478973b3e45e4efe594922451603a63217f61b9c27b4a1fa33ca8: Status 404 returned error can't find the container with id a0c91418929478973b3e45e4efe594922451603a63217f61b9c27b4a1fa33ca8
	Jun 05 17:55:34 multinode-292850 kubelet[1385]: I0605 17:55:34.257552    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-g9m8h" podStartSLOduration=33.257510907 podCreationTimestamp="2023-06-05 17:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-05 17:55:34.240940012 +0000 UTC m=+46.405827516" watchObservedRunningTime="2023-06-05 17:55:34.257510907 +0000 UTC m=+46.422398411"
	Jun 05 17:55:34 multinode-292850 kubelet[1385]: I0605 17:55:34.281155    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.281105283 podCreationTimestamp="2023-06-05 17:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-05 17:55:34.281050055 +0000 UTC m=+46.445937575" watchObservedRunningTime="2023-06-05 17:55:34.281105283 +0000 UTC m=+46.445992787"
	Jun 05 17:56:23 multinode-292850 kubelet[1385]: I0605 17:56:23.332292    1385 topology_manager.go:212] "Topology Admit Handler"
	Jun 05 17:56:23 multinode-292850 kubelet[1385]: W0605 17:56:23.348316    1385 reflector.go:533] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-292850" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-292850' and this object
	Jun 05 17:56:23 multinode-292850 kubelet[1385]: E0605 17:56:23.348361    1385 reflector.go:148] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-292850" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-292850' and this object
	Jun 05 17:56:23 multinode-292850 kubelet[1385]: I0605 17:56:23.362884    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhxbb\" (UniqueName: \"kubernetes.io/projected/274d5667-c017-4cf6-be38-ccb2e7035c8b-kube-api-access-rhxbb\") pod \"busybox-67b7f59bb-mtn99\" (UID: \"274d5667-c017-4cf6-be38-ccb2e7035c8b\") " pod="default/busybox-67b7f59bb-mtn99"
	Jun 05 17:56:24 multinode-292850 kubelet[1385]: E0605 17:56:24.474402    1385 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jun 05 17:56:24 multinode-292850 kubelet[1385]: E0605 17:56:24.474446    1385 projected.go:198] Error preparing data for projected volume kube-api-access-rhxbb for pod default/busybox-67b7f59bb-mtn99: failed to sync configmap cache: timed out waiting for the condition
	Jun 05 17:56:24 multinode-292850 kubelet[1385]: E0605 17:56:24.474533    1385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/274d5667-c017-4cf6-be38-ccb2e7035c8b-kube-api-access-rhxbb podName:274d5667-c017-4cf6-be38-ccb2e7035c8b nodeName:}" failed. No retries permitted until 2023-06-05 17:56:24.974510023 +0000 UTC m=+97.139397519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rhxbb" (UniqueName: "kubernetes.io/projected/274d5667-c017-4cf6-be38-ccb2e7035c8b-kube-api-access-rhxbb") pod "busybox-67b7f59bb-mtn99" (UID: "274d5667-c017-4cf6-be38-ccb2e7035c8b") : failed to sync configmap cache: timed out waiting for the condition
	Jun 05 17:56:25 multinode-292850 kubelet[1385]: W0605 17:56:25.164732    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47/crio/crio-c551e00ad59f40a92f0df308dea581e2a14bfcc9f57a1ac2a4ed127fe37c0373 WatchSource:0}: Error finding container c551e00ad59f40a92f0df308dea581e2a14bfcc9f57a1ac2a4ed127fe37c0373: Status 404 returned error can't find the container with id c551e00ad59f40a92f0df308dea581e2a14bfcc9f57a1ac2a4ed127fe37c0373
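The kube-root-ca.crt failures at 17:56:23–24 are another ordering artifact: the node authorizer only lets a kubelet read a configmap once a pod scheduled to that node references it, so the first projected-volume mount fails, retries 500ms later, and the busybox container turns up in the watch events at 17:56:25. Nothing here points at the ping failure itself. An after-the-fact check that the configmap is readable:

  $ kubectl --context multinode-292850 -n default get configmap kube-root-ca.crt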
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-292850 -n multinode-292850
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-292850 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.73s)
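For reference, PingHostFrom2Pods failed after only 4.73s while the cluster itself looks healthy in the logs above, which points at pod-to-host connectivity rather than cluster bring-up. A rough manual equivalent of what the test exercises — a sketch only: it assumes the busybox pods created at 17:56:23 are still running and that the busybox image ships nslookup and ping:

  $ kubectl --context multinode-292850 exec busybox-67b7f59bb-8g86r -- \
      nslookup host.minikube.internal
  $ kubectl --context multinode-292850 exec busybox-67b7f59bb-8g86r -- \
      ping -c 1 host.minikube.internal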

                                                
                                    
x
+
TestPreload (183s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-802397 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0605 18:03:12.209143  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 18:03:47.235654  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-802397 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m30.000318849s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 ssh -p test-preload-802397 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 ssh -p test-preload-802397 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.274028315s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-802397
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-802397: (5.883672492s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-802397 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0605 18:04:50.697608  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-802397 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m19.557547867s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-arm64 ssh -p test-preload-802397 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	IMAGE               TAG                 IMAGE ID            SIZE

                                                
                                                
-- /stdout --
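The failing assertion is preload_test.go:85's check for the image that was pulled before the stop: after the restart the crio image store reports nothing at all, not merely a missing busybox. A manual rerun of the same check, mirroring the test's own ssh commands from preload_test.go:57 and :80:

  $ out/minikube-linux-arm64 ssh -p test-preload-802397 -- \
      sudo crictl image ls | grep gcr.io/k8s-minikube/busybox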
panic.go:522: *** TestPreload FAILED at 2023-06-05 18:05:29.659942534 +0000 UTC m=+2097.475949528
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-802397
helpers_test.go:235: (dbg) docker inspect test-preload-802397:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b9cc6c9c5f69dbf2e9f79818bcd723716cc1ef90629e17d2b2225ec521c94dc",
	        "Created": "2023-06-05T18:02:33.533360532Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 499937,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-05T18:04:19.461543744Z",
	            "FinishedAt": "2023-06-05T18:04:09.502652215Z"
	        },
	        "Image": "sha256:80ea0da8caa6eb7997e8d55fe8736424844c5160aabf0e85547dc140c538e81f",
	        "ResolvConfPath": "/var/lib/docker/containers/2b9cc6c9c5f69dbf2e9f79818bcd723716cc1ef90629e17d2b2225ec521c94dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b9cc6c9c5f69dbf2e9f79818bcd723716cc1ef90629e17d2b2225ec521c94dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b9cc6c9c5f69dbf2e9f79818bcd723716cc1ef90629e17d2b2225ec521c94dc/hosts",
	        "LogPath": "/var/lib/docker/containers/2b9cc6c9c5f69dbf2e9f79818bcd723716cc1ef90629e17d2b2225ec521c94dc/2b9cc6c9c5f69dbf2e9f79818bcd723716cc1ef90629e17d2b2225ec521c94dc-json.log",
	        "Name": "/test-preload-802397",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "test-preload-802397:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-802397",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e74466affc4c96fa8dfc64ca5b359bc3791e3fea3609aef3aae06e1363ab0300-init/diff:/var/lib/docker/overlay2/12deadd96699cc2736cf6d24a9900cb6d72f9bc5f3f15d793b28adb475def155/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e74466affc4c96fa8dfc64ca5b359bc3791e3fea3609aef3aae06e1363ab0300/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e74466affc4c96fa8dfc64ca5b359bc3791e3fea3609aef3aae06e1363ab0300/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e74466affc4c96fa8dfc64ca5b359bc3791e3fea3609aef3aae06e1363ab0300/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "test-preload-802397",
	                "Source": "/var/lib/docker/volumes/test-preload-802397/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-802397",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-802397",
	                "name.minikube.sigs.k8s.io": "test-preload-802397",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f0d31ac4fd8cdde12cbb3f5ed08c6cea8a044c9a1e81c27438dfa59e2b0b083b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33243"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33242"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33239"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33241"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33240"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f0d31ac4fd8c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-802397": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b9cc6c9c5f6",
	                        "test-preload-802397"
	                    ],
	                    "NetworkID": "3cbeb95b64d03d3aeb4e51d4e2eac391402a7bc60ed9a5acc97be0afb94851df",
	                    "EndpointID": "49d419f26b6897d9992a2680d1733d3e5a365e29e1a567cd660e70217ba76a16",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
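Reading the inspect output against the failure: the container was restarted rather than recreated (StartedAt 18:04:19 follows FinishedAt 18:04:09, RestartCount is 0), and /var — where crio keeps its image store under /var/lib/containers — is the persistent test-preload-802397 volume mounted RW, so images pulled before the stop should have survived on disk across the restart. A one-liner to pull just those fields out of the inspect JSON:

  $ docker inspect -f '{{ .State.StartedAt }} {{ .State.FinishedAt }} {{ json .Mounts }}' \
      test-preload-802397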
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p test-preload-802397 -n test-preload-802397
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-802397 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p test-preload-802397 logs -n 25: (1.522857126s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-292850 ssh -n                                                                 | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:57 UTC | 05 Jun 23 17:57 UTC |
	|         | multinode-292850-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-292850 ssh -n multinode-292850 sudo cat                                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:57 UTC | 05 Jun 23 17:57 UTC |
	|         | /home/docker/cp-test_multinode-292850-m03_multinode-292850.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-292850 cp multinode-292850-m03:/home/docker/cp-test.txt                       | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:57 UTC | 05 Jun 23 17:57 UTC |
	|         | multinode-292850-m02:/home/docker/cp-test_multinode-292850-m03_multinode-292850-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-292850 ssh -n                                                                 | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:57 UTC | 05 Jun 23 17:57 UTC |
	|         | multinode-292850-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-292850 ssh -n multinode-292850-m02 sudo cat                                   | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:57 UTC | 05 Jun 23 17:57 UTC |
	|         | /home/docker/cp-test_multinode-292850-m03_multinode-292850-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-292850 node stop m03                                                          | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:57 UTC | 05 Jun 23 17:57 UTC |
	| node    | multinode-292850 node start                                                             | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:57 UTC | 05 Jun 23 17:57 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-292850                                                                | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:57 UTC |                     |
	| stop    | -p multinode-292850                                                                     | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:57 UTC | 05 Jun 23 17:58 UTC |
	| start   | -p multinode-292850                                                                     | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:58 UTC | 05 Jun 23 17:59 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-292850                                                                | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:59 UTC |                     |
	| node    | multinode-292850 node delete                                                            | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:59 UTC | 05 Jun 23 17:59 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-292850 stop                                                                   | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 17:59 UTC | 05 Jun 23 18:00 UTC |
	| start   | -p multinode-292850                                                                     | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 18:00 UTC | 05 Jun 23 18:01 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-292850                                                                | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 18:01 UTC |                     |
	| start   | -p multinode-292850-m02                                                                 | multinode-292850-m02 | jenkins | v1.30.1 | 05 Jun 23 18:01 UTC |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-292850-m03                                                                 | multinode-292850-m03 | jenkins | v1.30.1 | 05 Jun 23 18:01 UTC | 05 Jun 23 18:02 UTC |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-292850                                                                 | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 18:02 UTC |                     |
	| delete  | -p multinode-292850-m03                                                                 | multinode-292850-m03 | jenkins | v1.30.1 | 05 Jun 23 18:02 UTC | 05 Jun 23 18:02 UTC |
	| delete  | -p multinode-292850                                                                     | multinode-292850     | jenkins | v1.30.1 | 05 Jun 23 18:02 UTC | 05 Jun 23 18:02 UTC |
	| start   | -p test-preload-802397                                                                  | test-preload-802397  | jenkins | v1.30.1 | 05 Jun 23 18:02 UTC | 05 Jun 23 18:04 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --wait=true --preload=false                                                             |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-802397                                                                  | test-preload-802397  | jenkins | v1.30.1 | 05 Jun 23 18:04 UTC | 05 Jun 23 18:04 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-802397                                                                  | test-preload-802397  | jenkins | v1.30.1 | 05 Jun 23 18:04 UTC | 05 Jun 23 18:04 UTC |
	| start   | -p test-preload-802397                                                                  | test-preload-802397  | jenkins | v1.30.1 | 05 Jun 23 18:04 UTC | 05 Jun 23 18:05 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=docker                                                             |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| ssh     | -p test-preload-802397 -- sudo                                                          | test-preload-802397  | jenkins | v1.30.1 | 05 Jun 23 18:05 UTC | 05 Jun 23 18:05 UTC |
	|         | crictl image ls                                                                         |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/05 18:04:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0605 18:04:09.806427  499748 out.go:296] Setting OutFile to fd 1 ...
	I0605 18:04:09.806550  499748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:04:09.806559  499748 out.go:309] Setting ErrFile to fd 2...
	I0605 18:04:09.806565  499748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:04:09.806720  499748 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 18:04:09.807139  499748 out.go:303] Setting JSON to false
	I0605 18:04:09.808179  499748 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":9982,"bootTime":1685978268,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 18:04:09.808250  499748 start.go:137] virtualization:  
	I0605 18:04:09.811447  499748 out.go:177] * [test-preload-802397] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 18:04:09.814395  499748 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 18:04:09.817074  499748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:04:09.814529  499748 notify.go:220] Checking for updates...
	I0605 18:04:09.822307  499748 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 18:04:09.824937  499748 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 18:04:09.827422  499748 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 18:04:09.830080  499748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 18:04:09.833271  499748 config.go:182] Loaded profile config "test-preload-802397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0605 18:04:09.836243  499748 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0605 18:04:09.838578  499748 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 18:04:09.865270  499748 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 18:04:09.865367  499748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:04:09.959075  499748 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-06-05 18:04:09.949121879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:04:09.959209  499748 docker.go:294] overlay module found
	I0605 18:04:09.961889  499748 out.go:177] * Using the docker driver based on existing profile
	I0605 18:04:09.965238  499748 start.go:297] selected driver: docker
	I0605 18:04:09.965257  499748 start.go:875] validating driver "docker" against &{Name:test-preload-802397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-802397 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:04:09.965367  499748 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 18:04:09.965987  499748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:04:10.048039  499748 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-06-05 18:04:10.037432533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:04:10.048399  499748 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0605 18:04:10.048430  499748 cni.go:84] Creating CNI manager for ""
	I0605 18:04:10.048439  499748 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 18:04:10.048474  499748 start_flags.go:319] config:
	{Name:test-preload-802397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-802397 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:04:10.051389  499748 out.go:177] * Starting control plane node test-preload-802397 in cluster test-preload-802397
	I0605 18:04:10.054503  499748 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 18:04:10.057006  499748 out.go:177] * Pulling base image ...
	I0605 18:04:10.059769  499748 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0605 18:04:10.059833  499748 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon
	I0605 18:04:10.077697  499748 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon, skipping pull
	I0605 18:04:10.077719  499748 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f exists in daemon, skipping load
	I0605 18:04:10.133064  499748 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4
	I0605 18:04:10.133089  499748 cache.go:57] Caching tarball of preloaded images
	I0605 18:04:10.133254  499748 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0605 18:04:10.135622  499748 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0605 18:04:10.137924  499748 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4 ...
	I0605 18:04:10.262881  499748 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:d2db394df12e407c28bb66857d0d812b -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4
	I0605 18:04:18.225296  499748 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4 ...
	I0605 18:04:18.225407  499748 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-arm64.tar.lz4 ...
	I0605 18:04:19.097596  499748 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I0605 18:04:19.097752  499748 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/config.json ...
	I0605 18:04:19.097997  499748 cache.go:195] Successfully downloaded all kic artifacts
	I0605 18:04:19.098034  499748 start.go:364] acquiring machines lock for test-preload-802397: {Name:mk2364a5c3dfa289259e4793f41b433f9915e10a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:04:19.098115  499748 start.go:368] acquired machines lock for "test-preload-802397" in 45.448µs
	I0605 18:04:19.098131  499748 start.go:96] Skipping create...Using existing machine configuration
	I0605 18:04:19.098137  499748 fix.go:55] fixHost starting: 
	I0605 18:04:19.098403  499748 cli_runner.go:164] Run: docker container inspect test-preload-802397 --format={{.State.Status}}
	I0605 18:04:19.121089  499748 fix.go:103] recreateIfNeeded on test-preload-802397: state=Stopped err=<nil>
	W0605 18:04:19.121121  499748 fix.go:129] unexpected machine state, will restart: <nil>
	I0605 18:04:19.124348  499748 out.go:177] * Restarting existing docker container for "test-preload-802397" ...
	I0605 18:04:19.127149  499748 cli_runner.go:164] Run: docker start test-preload-802397
	I0605 18:04:19.469951  499748 cli_runner.go:164] Run: docker container inspect test-preload-802397 --format={{.State.Status}}
	I0605 18:04:19.490747  499748 kic.go:426] container "test-preload-802397" state is running.
	I0605 18:04:19.491216  499748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-802397
	I0605 18:04:19.517674  499748 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/config.json ...
	I0605 18:04:19.517912  499748 machine.go:88] provisioning docker machine ...
	I0605 18:04:19.517929  499748 ubuntu.go:169] provisioning hostname "test-preload-802397"
	I0605 18:04:19.517979  499748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-802397
	I0605 18:04:19.545740  499748 main.go:141] libmachine: Using SSH client type: native
	I0605 18:04:19.546189  499748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33243 <nil> <nil>}
	I0605 18:04:19.546202  499748 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-802397 && echo "test-preload-802397" | sudo tee /etc/hostname
	I0605 18:04:19.546915  499748 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47542->127.0.0.1:33243: read: connection reset by peer
	I0605 18:04:22.702901  499748 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-802397
	
	I0605 18:04:22.702982  499748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-802397
	I0605 18:04:22.721392  499748 main.go:141] libmachine: Using SSH client type: native
	I0605 18:04:22.721837  499748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33243 <nil> <nil>}
	I0605 18:04:22.721862  499748 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-802397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-802397/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-802397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0605 18:04:22.861319  499748 main.go:141] libmachine: SSH cmd err, output: <nil>: 
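	(For reference: the grep/sed script above is minikube's idempotent hostname fix for /etc/hosts. A standalone sketch of the same guard, reusing the host name from this run, would be:
	  grep -xq '.*\stest-preload-802397' /etc/hosts && echo "hostname already resolvable"
	otherwise the script rewrites or appends the 127.0.1.1 entry exactly as shown.)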
	I0605 18:04:22.861347  499748 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16634-402421/.minikube CaCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16634-402421/.minikube}
	I0605 18:04:22.861370  499748 ubuntu.go:177] setting up certificates
	I0605 18:04:22.861379  499748 provision.go:83] configureAuth start
	I0605 18:04:22.861441  499748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-802397
	I0605 18:04:22.880592  499748 provision.go:138] copyHostCerts
	I0605 18:04:22.880683  499748 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem, removing ...
	I0605 18:04:22.880708  499748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem
	I0605 18:04:22.880789  499748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem (1123 bytes)
	I0605 18:04:22.880928  499748 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem, removing ...
	I0605 18:04:22.880938  499748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem
	I0605 18:04:22.880966  499748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem (1675 bytes)
	I0605 18:04:22.881024  499748 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem, removing ...
	I0605 18:04:22.881033  499748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem
	I0605 18:04:22.881058  499748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem (1082 bytes)
	I0605 18:04:22.881104  499748 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem org=jenkins.test-preload-802397 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-802397]
	I0605 18:04:23.223714  499748 provision.go:172] copyRemoteCerts
	I0605 18:04:23.223790  499748 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0605 18:04:23.223838  499748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-802397
	I0605 18:04:23.242950  499748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/test-preload-802397/id_rsa Username:docker}
	I0605 18:04:23.348306  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0605 18:04:23.381026  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0605 18:04:23.412253  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0605 18:04:23.442475  499748 provision.go:86] duration metric: configureAuth took 581.082042ms
	I0605 18:04:23.442504  499748 ubuntu.go:193] setting minikube options for container-runtime
	I0605 18:04:23.442733  499748 config.go:182] Loaded profile config "test-preload-802397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0605 18:04:23.442889  499748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-802397
	I0605 18:04:23.462405  499748 main.go:141] libmachine: Using SSH client type: native
	I0605 18:04:23.462851  499748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33243 <nil> <nil>}
	I0605 18:04:23.462873  499748 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0605 18:04:23.823088  499748 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0605 18:04:23.823119  499748 machine.go:91] provisioned docker machine in 4.305198036s
	I0605 18:04:23.823136  499748 start.go:300] post-start starting for "test-preload-802397" (driver="docker")
	I0605 18:04:23.823143  499748 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0605 18:04:23.823210  499748 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0605 18:04:23.823256  499748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-802397
	I0605 18:04:23.851410  499748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/test-preload-802397/id_rsa Username:docker}
	I0605 18:04:23.955372  499748 ssh_runner.go:195] Run: cat /etc/os-release
	I0605 18:04:23.959729  499748 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0605 18:04:23.959767  499748 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0605 18:04:23.959780  499748 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0605 18:04:23.959787  499748 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0605 18:04:23.959797  499748 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/addons for local assets ...
	I0605 18:04:23.959863  499748 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/files for local assets ...
	I0605 18:04:23.960001  499748 filesync.go:149] local asset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> 4078132.pem in /etc/ssl/certs
	I0605 18:04:23.960119  499748 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0605 18:04:23.970980  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 18:04:24.007203  499748 start.go:303] post-start completed in 184.047811ms
	I0605 18:04:24.007322  499748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 18:04:24.007390  499748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-802397
	I0605 18:04:24.029980  499748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/test-preload-802397/id_rsa Username:docker}
	I0605 18:04:24.127053  499748 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0605 18:04:24.133281  499748 fix.go:57] fixHost completed within 5.035129028s
	I0605 18:04:24.133307  499748 start.go:83] releasing machines lock for "test-preload-802397", held for 5.035182057s
	I0605 18:04:24.133382  499748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-802397
	I0605 18:04:24.151714  499748 ssh_runner.go:195] Run: cat /version.json
	I0605 18:04:24.151725  499748 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0605 18:04:24.151768  499748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-802397
	I0605 18:04:24.151794  499748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-802397
	I0605 18:04:24.171626  499748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/test-preload-802397/id_rsa Username:docker}
	I0605 18:04:24.198774  499748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/test-preload-802397/id_rsa Username:docker}
	I0605 18:04:24.272554  499748 ssh_runner.go:195] Run: systemctl --version
	I0605 18:04:24.413192  499748 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0605 18:04:24.567727  499748 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0605 18:04:24.573709  499748 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 18:04:24.585153  499748 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0605 18:04:24.585234  499748 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 18:04:24.597125  499748 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0605 18:04:24.597150  499748 start.go:481] detecting cgroup driver to use...
	I0605 18:04:24.597185  499748 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0605 18:04:24.597240  499748 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0605 18:04:24.612102  499748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0605 18:04:24.626428  499748 docker.go:193] disabling cri-docker service (if available) ...
	I0605 18:04:24.626494  499748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0605 18:04:24.642410  499748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0605 18:04:24.657030  499748 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0605 18:04:24.749835  499748 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0605 18:04:24.857933  499748 docker.go:209] disabling docker service ...
	I0605 18:04:24.858047  499748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0605 18:04:24.874518  499748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0605 18:04:24.888945  499748 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0605 18:04:24.998256  499748 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0605 18:04:25.105111  499748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0605 18:04:25.120233  499748 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 18:04:25.141441  499748 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0605 18:04:25.141509  499748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:04:25.155229  499748 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0605 18:04:25.155308  499748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:04:25.169366  499748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:04:25.182438  499748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:04:25.196150  499748 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0605 18:04:25.207885  499748 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0605 18:04:25.219298  499748 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0605 18:04:25.230066  499748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 18:04:25.326082  499748 ssh_runner.go:195] Run: sudo systemctl restart crio
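	(For reference: the sed edits above set the pause image, cgroup manager, and conmon cgroup in CRI-O's drop-in config. They can be spot-checked after the restart; the expected values below are inferred from the commands, not captured output from this run:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.7"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	)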
	I0605 18:04:25.459251  499748 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0605 18:04:25.459319  499748 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0605 18:04:25.463914  499748 start.go:549] Will wait 60s for crictl version
	I0605 18:04:25.464047  499748 ssh_runner.go:195] Run: which crictl
	I0605 18:04:25.468990  499748 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0605 18:04:25.518162  499748 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0605 18:04:25.518254  499748 ssh_runner.go:195] Run: crio --version
	I0605 18:04:25.570232  499748 ssh_runner.go:195] Run: crio --version
	I0605 18:04:25.619362  499748 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.5 ...
	I0605 18:04:25.621480  499748 cli_runner.go:164] Run: docker network inspect test-preload-802397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 18:04:25.639317  499748 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0605 18:04:25.643992  499748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0605 18:04:25.658030  499748 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0605 18:04:25.658102  499748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 18:04:25.708221  499748 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 18:04:25.708243  499748 crio.go:415] Images already preloaded, skipping extraction
	I0605 18:04:25.708299  499748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 18:04:25.751340  499748 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 18:04:25.751363  499748 cache_images.go:84] Images are preloaded, skipping loading
	I0605 18:04:25.751450  499748 ssh_runner.go:195] Run: crio config
	I0605 18:04:25.835403  499748 cni.go:84] Creating CNI manager for ""
	I0605 18:04:25.835428  499748 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 18:04:25.835441  499748 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0605 18:04:25.835485  499748 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-802397 NodeName:test-preload-802397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0605 18:04:25.835667  499748 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-802397"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
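	(For reference: this rendered config is written to /var/tmp/minikube/kubeadm.yaml.new, and later in this log minikube decides between an in-place restart and a re-init by diffing it against the active file. A rough equivalent of that drift check:
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo "kubeadm config unchanged"
	)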
	
	I0605 18:04:25.835762  499748 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=test-preload-802397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-802397 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0605 18:04:25.835876  499748 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0605 18:04:25.846940  499748 binaries.go:44] Found k8s binaries, skipping transfer
	I0605 18:04:25.847028  499748 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0605 18:04:25.857992  499748 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0605 18:04:25.879286  499748 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0605 18:04:25.900610  499748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0605 18:04:25.922821  499748 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0605 18:04:25.927770  499748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
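	(For reference: the one-liner above is minikube's idempotent /etc/hosts update, stripping any stale control-plane.minikube.internal entry, appending the current one, and sudo-copying the temp file back over the root-owned original. By hand, with the node IP from this run:
	  { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; printf '192.168.67.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	)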
	I0605 18:04:25.941619  499748 certs.go:56] Setting up /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397 for IP: 192.168.67.2
	I0605 18:04:25.941649  499748 certs.go:190] acquiring lock for shared ca certs: {Name:mkcde6289d01a116d789395fcd8dd485889e790f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:04:25.941785  499748 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key
	I0605 18:04:25.941823  499748 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key
	I0605 18:04:25.941896  499748 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/client.key
	I0605 18:04:25.941956  499748 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/apiserver.key.c7fa3a9e
	I0605 18:04:25.941994  499748 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/proxy-client.key
	I0605 18:04:25.942111  499748 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem (1338 bytes)
	W0605 18:04:25.942140  499748 certs.go:433] ignoring /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813_empty.pem, impossibly tiny 0 bytes
	I0605 18:04:25.942149  499748 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem (1679 bytes)
	I0605 18:04:25.942174  499748 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem (1082 bytes)
	I0605 18:04:25.942197  499748 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem (1123 bytes)
	I0605 18:04:25.942238  499748 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem (1675 bytes)
	I0605 18:04:25.942317  499748 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 18:04:25.942949  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0605 18:04:25.973231  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0605 18:04:26.002907  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0605 18:04:26.033666  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0605 18:04:26.063390  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0605 18:04:26.094513  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0605 18:04:26.123737  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0605 18:04:26.153560  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0605 18:04:26.182918  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0605 18:04:26.212319  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem --> /usr/share/ca-certificates/407813.pem (1338 bytes)
	I0605 18:04:26.241696  499748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /usr/share/ca-certificates/4078132.pem (1708 bytes)
	I0605 18:04:26.270941  499748 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0605 18:04:26.292240  499748 ssh_runner.go:195] Run: openssl version
	I0605 18:04:26.299474  499748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0605 18:04:26.312014  499748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:04:26.316797  499748 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun  5 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:04:26.316863  499748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:04:26.325671  499748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0605 18:04:26.337204  499748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407813.pem && ln -fs /usr/share/ca-certificates/407813.pem /etc/ssl/certs/407813.pem"
	I0605 18:04:26.349678  499748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407813.pem
	I0605 18:04:26.354590  499748 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun  5 17:39 /usr/share/ca-certificates/407813.pem
	I0605 18:04:26.354703  499748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407813.pem
	I0605 18:04:26.363570  499748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407813.pem /etc/ssl/certs/51391683.0"
	I0605 18:04:26.375242  499748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4078132.pem && ln -fs /usr/share/ca-certificates/4078132.pem /etc/ssl/certs/4078132.pem"
	I0605 18:04:26.387636  499748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4078132.pem
	I0605 18:04:26.392523  499748 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun  5 17:39 /usr/share/ca-certificates/4078132.pem
	I0605 18:04:26.392638  499748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4078132.pem
	I0605 18:04:26.401719  499748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4078132.pem /etc/ssl/certs/3ec20f2e.0"
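	(For reference: the *.0 symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is how the system trust store indexes CA certificates. The same link can be built by hand, e.g. for the minikube CA:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	)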
	I0605 18:04:26.413005  499748 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0605 18:04:26.417801  499748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0605 18:04:26.426610  499748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0605 18:04:26.435463  499748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0605 18:04:26.444176  499748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0605 18:04:26.453231  499748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0605 18:04:26.461958  499748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0605 18:04:26.470755  499748 kubeadm.go:404] StartCluster: {Name:test-preload-802397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-802397 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:04:26.470916  499748 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0605 18:04:26.471022  499748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0605 18:04:26.515463  499748 cri.go:88] found id: ""
	I0605 18:04:26.515594  499748 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0605 18:04:26.526599  499748 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0605 18:04:26.526621  499748 kubeadm.go:636] restartCluster start
	I0605 18:04:26.526681  499748 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0605 18:04:26.537510  499748 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:26.537996  499748 kubeconfig.go:135] verify returned: extract IP: "test-preload-802397" does not appear in /home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 18:04:26.538107  499748 kubeconfig.go:146] "test-preload-802397" context is missing from /home/jenkins/minikube-integration/16634-402421/kubeconfig - will repair!
	I0605 18:04:26.538378  499748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/kubeconfig: {Name:mkb77de9bf1ac5a664886fbfefd28a762472c016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:04:26.539049  499748 kapi.go:59] client config for test-preload-802397: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/client.crt", KeyFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/client.key", CAFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13df7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0605 18:04:26.540079  499748 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0605 18:04:26.551784  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:26.551887  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:26.564666  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:27.065647  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:27.065763  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:27.079871  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:27.565637  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:27.565760  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:27.578125  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:28.065729  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:28.065877  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:28.078855  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:28.564906  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:28.565039  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:28.577678  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:29.064887  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:29.065059  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:29.077832  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:29.565059  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:29.565164  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:29.577829  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:30.064816  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:30.064940  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:30.080863  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:30.565559  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:30.565664  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:30.580058  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:31.065747  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:31.065888  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:31.079164  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:31.565888  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:31.566022  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:31.579236  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:32.065792  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:32.065880  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:32.079169  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:32.565668  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:32.565769  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:32.578319  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:33.064912  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:33.065026  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:33.082355  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:33.564911  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:33.565004  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:33.578487  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:34.064875  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:34.064986  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:34.077667  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:34.565241  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:34.565369  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:34.578006  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:35.065769  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:35.065858  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:35.080255  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:35.564866  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:35.564993  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:35.579531  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:36.064928  499748 api_server.go:166] Checking apiserver status ...
	I0605 18:04:36.065045  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0605 18:04:36.078305  499748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:36.552064  499748 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
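	The run of failed pgrep probes above is a plain poll-until-deadline loop: check for the apiserver process every 500ms and give up once the ~10s context expires, which is exactly the "context deadline exceeded" that triggers the reconfigure. A minimal Go sketch of that pattern, assuming direct shell access rather than minikube's ssh_runner (the function name is illustrative):

	// pollApiserver mirrors the retry loop in the log above: probe for the
	// kube-apiserver process every 500ms until the context deadline hits.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func pollApiserver(ctx context.Context) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver error: %w", ctx.Err())
			case <-ticker.C:
				// Same probe the log shows: pgrep exits 0 only if a
				// kube-apiserver process with "minikube" in its args exists.
				if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
					return nil
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := pollApiserver(ctx); err != nil {
			fmt.Println("needs reconfigure:", err) // matches the log line above
		}
	}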
	I0605 18:04:36.552096  499748 kubeadm.go:1123] stopping kube-system containers ...
	I0605 18:04:36.552140  499748 cri.go:53] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0605 18:04:36.552223  499748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0605 18:04:36.597952  499748 cri.go:88] found id: ""
	I0605 18:04:36.598021  499748 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0605 18:04:36.613216  499748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0605 18:04:36.625022  499748 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  5 18:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  5 18:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Jun  5 18:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  5 18:03 /etc/kubernetes/scheduler.conf
	
	I0605 18:04:36.625094  499748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0605 18:04:36.636595  499748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0605 18:04:36.647895  499748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0605 18:04:36.659215  499748 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:36.659294  499748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0605 18:04:36.670646  499748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0605 18:04:36.682081  499748 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0605 18:04:36.682153  499748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
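	The grep/rm sequence above keeps each kubeadm-generated kubeconfig only if it already points at the expected control-plane endpoint; files missing it (here controller-manager.conf and scheduler.conf) are removed so the next kubeconfig phase regenerates them. A minimal sketch of that check, run as root on the node; an illustration of the pattern, not minikube's kubeadm.go:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				os.Remove(f) // regenerated by the kubeconfig init phase below
			}
		}
	}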
	I0605 18:04:36.694764  499748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0605 18:04:36.706343  499748 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0605 18:04:36.706370  499748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0605 18:04:36.770892  499748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0605 18:04:39.871575  499748 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.100627367s)
	I0605 18:04:39.871607  499748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0605 18:04:40.126634  499748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0605 18:04:40.220566  499748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
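	Rather than a full "kubeadm init", the restart path replays individual init phases in a fixed order against the generated config, as the five Run lines above show. A sketch of that sequence using the same binary path and config file as the log (the bash wrapper here stands in for ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Phase order taken directly from the log: certs, kubeconfig,
		// kubelet-start, control-plane, etcd.
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, p := range phases {
			cmd := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", p, err, out)
				return
			}
		}
	}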
	I0605 18:04:40.340419  499748 api_server.go:52] waiting for apiserver process to appear ...
	I0605 18:04:40.340489  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 18:04:40.871808  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 18:04:41.371738  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 18:04:41.390778  499748 api_server.go:72] duration metric: took 1.05035819s to wait for apiserver process to appear ...
	I0605 18:04:41.390803  499748 api_server.go:88] waiting for apiserver healthz status ...
	I0605 18:04:41.390826  499748 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:04:46.391822  499748 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0605 18:04:46.892670  499748 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:04:46.945994  499748 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0605 18:04:46.946022  499748 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0605 18:04:47.392126  499748 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:04:47.402308  499748 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0605 18:04:47.402341  499748 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0605 18:04:47.892442  499748 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:04:47.905165  499748 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0605 18:04:47.905200  499748 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0605 18:04:48.392812  499748 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:04:48.406243  499748 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0605 18:04:48.422718  499748 api_server.go:141] control plane version: v1.24.4
	I0605 18:04:48.422750  499748 api_server.go:131] duration metric: took 7.031939558s to wait for apiserver health ...
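	The healthz wait above polls https://192.168.67.2:8443/healthz and treats anything other than 200 "ok" as "retry": the anonymous 403 and the 500 with failing poststarthooks both appear before the apiserver settles. A stripped-down sketch of such a probe; InsecureSkipVerify is for illustration only, the real client presents the cluster CA and client certificate from the rest.Config shown earlier:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// healthz returns the status code and body of one GET /healthz probe.
	func healthz(url string) (int, string, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return 0, "", err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode, string(body), nil
	}

	func main() {
		for {
			code, body, err := healthz("https://192.168.67.2:8443/healthz")
			if err == nil && code == http.StatusOK {
				fmt.Println(body) // "ok", as at 18:04:48 above
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}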
	I0605 18:04:48.422761  499748 cni.go:84] Creating CNI manager for ""
	I0605 18:04:48.422768  499748 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 18:04:48.425616  499748 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0605 18:04:48.428059  499748 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0605 18:04:48.437733  499748 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.24.4/kubectl ...
	I0605 18:04:48.437756  499748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0605 18:04:48.463013  499748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0605 18:04:49.491791  499748 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.028741078s)
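	The CNI step above copies the kindnet manifest to the node (the "scp memory" line) and applies it with the bundled kubectl. Reproducing just the apply, with the paths taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.24.4/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
		if err != nil {
			fmt.Printf("apply failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}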
	I0605 18:04:49.491824  499748 system_pods.go:43] waiting for kube-system pods to appear ...
	I0605 18:04:49.501233  499748 system_pods.go:59] 8 kube-system pods found
	I0605 18:04:49.501276  499748 system_pods.go:61] "coredns-6d4b75cb6d-w5w69" [d4ef6772-0830-445b-a40a-bc3b1fff8819] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0605 18:04:49.501310  499748 system_pods.go:61] "etcd-test-preload-802397" [6d35fd61-11b3-4f8e-83ab-536d3925c17c] Running
	I0605 18:04:49.501331  499748 system_pods.go:61] "kindnet-27r2t" [49d7f7e1-2685-4def-875d-18f00939cf48] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0605 18:04:49.501344  499748 system_pods.go:61] "kube-apiserver-test-preload-802397" [c66560c4-f905-4f0b-83f8-c0ecbcda124e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0605 18:04:49.501351  499748 system_pods.go:61] "kube-controller-manager-test-preload-802397" [819503fe-6e57-416c-9e66-5fdd3e0320a3] Running
	I0605 18:04:49.501360  499748 system_pods.go:61] "kube-proxy-f9rwh" [7742d5e3-e571-4423-96dc-2ffdcd149632] Running
	I0605 18:04:49.501386  499748 system_pods.go:61] "kube-scheduler-test-preload-802397" [33364c2d-66ea-4f36-9909-1c3b08e27877] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0605 18:04:49.501400  499748 system_pods.go:61] "storage-provisioner" [162c770a-63de-407b-a4a2-708208a41226] Running
	I0605 18:04:49.501426  499748 system_pods.go:74] duration metric: took 9.594461ms to wait for pod list to return data ...
	I0605 18:04:49.501434  499748 node_conditions.go:102] verifying NodePressure condition ...
	I0605 18:04:49.504987  499748 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0605 18:04:49.505024  499748 node_conditions.go:123] node cpu capacity is 2
	I0605 18:04:49.505036  499748 node_conditions.go:105] duration metric: took 3.591485ms to run NodePressure ...
	I0605 18:04:49.505058  499748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0605 18:04:49.703544  499748 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0605 18:04:49.708386  499748 kubeadm.go:787] kubelet initialised
	I0605 18:04:49.708415  499748 kubeadm.go:788] duration metric: took 4.847876ms waiting for restarted kubelet to initialise ...
	I0605 18:04:49.708425  499748 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0605 18:04:49.714762  499748 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace to be "Ready" ...
	I0605 18:04:51.727220  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:04:53.727337  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:04:55.727509  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:04:58.227809  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:00.230408  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:02.727705  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:05.226961  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:07.228435  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:09.233201  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:11.727018  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:14.227431  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:16.727673  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:19.227717  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:21.726932  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:23.727708  499748 pod_ready.go:102] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"False"
	I0605 18:05:25.229369  499748 pod_ready.go:92] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:25.229395  499748 pod_ready.go:81] duration metric: took 35.514470396s waiting for pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:25.229406  499748 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:25.234693  499748 pod_ready.go:92] pod "etcd-test-preload-802397" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:25.234722  499748 pod_ready.go:81] duration metric: took 5.308413ms waiting for pod "etcd-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:25.234738  499748 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:25.240193  499748 pod_ready.go:92] pod "kube-apiserver-test-preload-802397" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:25.240273  499748 pod_ready.go:81] duration metric: took 5.525659ms waiting for pod "kube-apiserver-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:25.240304  499748 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:25.245497  499748 pod_ready.go:92] pod "kube-controller-manager-test-preload-802397" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:25.245521  499748 pod_ready.go:81] duration metric: took 5.201542ms waiting for pod "kube-controller-manager-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:25.245533  499748 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f9rwh" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:25.250973  499748 pod_ready.go:92] pod "kube-proxy-f9rwh" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:25.250999  499748 pod_ready.go:81] duration metric: took 5.458755ms waiting for pod "kube-proxy-f9rwh" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:25.251011  499748 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:25.624702  499748 pod_ready.go:92] pod "kube-scheduler-test-preload-802397" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:25.624733  499748 pod_ready.go:81] duration metric: took 373.711239ms waiting for pod "kube-scheduler-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:25.624748  499748 pod_ready.go:38] duration metric: took 35.916312726s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
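	Each pod_ready.go wait above is a poll on the pod's Ready condition with a 4m0s budget; coredns took 35s here because its container restarts after the control plane comes back. A client-go sketch of the same check for the coredns pod, assuming the kubeconfig path from this run; helper names are illustrative, not minikube's internals:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16634-402421/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		deadline := time.Now().Add(4 * time.Minute) // matches "waiting up to 4m0s"
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6d4b75cb6d-w5w69", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println(`status "Ready":"True"`)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod")
	}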
	I0605 18:05:25.624764  499748 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0605 18:05:25.633983  499748 ops.go:34] apiserver oom_adj: -16
	I0605 18:05:25.634005  499748 kubeadm.go:640] restartCluster took 59.107377502s
	I0605 18:05:25.634015  499748 kubeadm.go:406] StartCluster complete in 59.163271105s
	I0605 18:05:25.634038  499748 settings.go:142] acquiring lock: {Name:mk7ddedb44759cc39266e9c612309013659bd7a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:05:25.634137  499748 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 18:05:25.634827  499748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16634-402421/kubeconfig: {Name:mkb77de9bf1ac5a664886fbfefd28a762472c016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:05:25.635044  499748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0605 18:05:25.635491  499748 config.go:182] Loaded profile config "test-preload-802397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0605 18:05:25.635597  499748 kapi.go:59] client config for test-preload-802397: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/client.crt", KeyFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/client.key", CAFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13df7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0605 18:05:25.635621  499748 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0605 18:05:25.635697  499748 addons.go:66] Setting storage-provisioner=true in profile "test-preload-802397"
	I0605 18:05:25.635712  499748 addons.go:228] Setting addon storage-provisioner=true in "test-preload-802397"
	W0605 18:05:25.635718  499748 addons.go:237] addon storage-provisioner should already be in state true
	I0605 18:05:25.635757  499748 host.go:66] Checking if "test-preload-802397" exists ...
	I0605 18:05:25.636055  499748 addons.go:66] Setting default-storageclass=true in profile "test-preload-802397"
	I0605 18:05:25.636077  499748 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-802397"
	I0605 18:05:25.636311  499748 cli_runner.go:164] Run: docker container inspect test-preload-802397 --format={{.State.Status}}
	I0605 18:05:25.636359  499748 cli_runner.go:164] Run: docker container inspect test-preload-802397 --format={{.State.Status}}
	I0605 18:05:25.644380  499748 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-802397" context rescaled to 1 replicas
	I0605 18:05:25.644418  499748 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0605 18:05:25.647030  499748 out.go:177] * Verifying Kubernetes components...
	I0605 18:05:25.649261  499748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 18:05:25.682530  499748 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0605 18:05:25.685170  499748 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 18:05:25.685190  499748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0605 18:05:25.685273  499748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-802397
	I0605 18:05:25.697086  499748 kapi.go:59] client config for test-preload-802397: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/client.crt", KeyFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/test-preload-802397/client.key", CAFile:"/home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13df7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0605 18:05:25.719543  499748 addons.go:228] Setting addon default-storageclass=true in "test-preload-802397"
	W0605 18:05:25.719566  499748 addons.go:237] addon default-storageclass should already be in state true
	I0605 18:05:25.719633  499748 host.go:66] Checking if "test-preload-802397" exists ...
	I0605 18:05:25.720235  499748 cli_runner.go:164] Run: docker container inspect test-preload-802397 --format={{.State.Status}}
	I0605 18:05:25.721487  499748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/test-preload-802397/id_rsa Username:docker}
	I0605 18:05:25.759576  499748 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0605 18:05:25.759597  499748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0605 18:05:25.759660  499748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-802397
	I0605 18:05:25.794722  499748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33243 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/test-preload-802397/id_rsa Username:docker}
	I0605 18:05:25.807820  499748 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0605 18:05:25.807895  499748 node_ready.go:35] waiting up to 6m0s for node "test-preload-802397" to be "Ready" ...
	I0605 18:05:25.824846  499748 node_ready.go:49] node "test-preload-802397" has status "Ready":"True"
	I0605 18:05:25.824876  499748 node_ready.go:38] duration metric: took 16.961767ms waiting for node "test-preload-802397" to be "Ready" ...
	I0605 18:05:25.824887  499748 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0605 18:05:25.884563  499748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0605 18:05:25.932063  499748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0605 18:05:26.030989  499748 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:26.179327  499748 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0605 18:05:26.181916  499748 addons.go:499] enable addons completed in 546.288257ms: enabled=[storage-provisioner default-storageclass]
	I0605 18:05:26.424791  499748 pod_ready.go:92] pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:26.424817  499748 pod_ready.go:81] duration metric: took 393.740462ms waiting for pod "coredns-6d4b75cb6d-w5w69" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:26.424829  499748 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:26.825109  499748 pod_ready.go:92] pod "etcd-test-preload-802397" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:26.825133  499748 pod_ready.go:81] duration metric: took 400.295231ms waiting for pod "etcd-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:26.825149  499748 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:27.225616  499748 pod_ready.go:92] pod "kube-apiserver-test-preload-802397" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:27.225643  499748 pod_ready.go:81] duration metric: took 400.485836ms waiting for pod "kube-apiserver-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:27.225656  499748 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:27.624941  499748 pod_ready.go:92] pod "kube-controller-manager-test-preload-802397" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:27.624964  499748 pod_ready.go:81] duration metric: took 399.300091ms waiting for pod "kube-controller-manager-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:27.624976  499748 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f9rwh" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:28.025409  499748 pod_ready.go:92] pod "kube-proxy-f9rwh" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:28.025437  499748 pod_ready.go:81] duration metric: took 400.453648ms waiting for pod "kube-proxy-f9rwh" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:28.025449  499748 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:28.424861  499748 pod_ready.go:92] pod "kube-scheduler-test-preload-802397" in "kube-system" namespace has status "Ready":"True"
	I0605 18:05:28.424891  499748 pod_ready.go:81] duration metric: took 399.428894ms waiting for pod "kube-scheduler-test-preload-802397" in "kube-system" namespace to be "Ready" ...
	I0605 18:05:28.424904  499748 pod_ready.go:38] duration metric: took 2.600006002s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0605 18:05:28.424921  499748 api_server.go:52] waiting for apiserver process to appear ...
	I0605 18:05:28.424985  499748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 18:05:28.438706  499748 api_server.go:72] duration metric: took 2.794257239s to wait for apiserver process to appear ...
	I0605 18:05:28.438733  499748 api_server.go:88] waiting for apiserver healthz status ...
	I0605 18:05:28.438750  499748 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:05:28.448113  499748 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0605 18:05:28.449120  499748 api_server.go:141] control plane version: v1.24.4
	I0605 18:05:28.449143  499748 api_server.go:131] duration metric: took 10.402517ms to wait for apiserver health ...
	I0605 18:05:28.449153  499748 system_pods.go:43] waiting for kube-system pods to appear ...
	I0605 18:05:28.627936  499748 system_pods.go:59] 8 kube-system pods found
	I0605 18:05:28.628025  499748 system_pods.go:61] "coredns-6d4b75cb6d-w5w69" [d4ef6772-0830-445b-a40a-bc3b1fff8819] Running
	I0605 18:05:28.628050  499748 system_pods.go:61] "etcd-test-preload-802397" [6d35fd61-11b3-4f8e-83ab-536d3925c17c] Running
	I0605 18:05:28.628066  499748 system_pods.go:61] "kindnet-27r2t" [49d7f7e1-2685-4def-875d-18f00939cf48] Running
	I0605 18:05:28.628086  499748 system_pods.go:61] "kube-apiserver-test-preload-802397" [c66560c4-f905-4f0b-83f8-c0ecbcda124e] Running
	I0605 18:05:28.628099  499748 system_pods.go:61] "kube-controller-manager-test-preload-802397" [819503fe-6e57-416c-9e66-5fdd3e0320a3] Running
	I0605 18:05:28.628105  499748 system_pods.go:61] "kube-proxy-f9rwh" [7742d5e3-e571-4423-96dc-2ffdcd149632] Running
	I0605 18:05:28.628110  499748 system_pods.go:61] "kube-scheduler-test-preload-802397" [33364c2d-66ea-4f36-9909-1c3b08e27877] Running
	I0605 18:05:28.628116  499748 system_pods.go:61] "storage-provisioner" [162c770a-63de-407b-a4a2-708208a41226] Running
	I0605 18:05:28.628125  499748 system_pods.go:74] duration metric: took 178.966719ms to wait for pod list to return data ...
	I0605 18:05:28.628140  499748 default_sa.go:34] waiting for default service account to be created ...
	I0605 18:05:28.824040  499748 default_sa.go:45] found service account: "default"
	I0605 18:05:28.824063  499748 default_sa.go:55] duration metric: took 195.916415ms for default service account to be created ...
	I0605 18:05:28.824074  499748 system_pods.go:116] waiting for k8s-apps to be running ...
	I0605 18:05:29.028905  499748 system_pods.go:86] 8 kube-system pods found
	I0605 18:05:29.028939  499748 system_pods.go:89] "coredns-6d4b75cb6d-w5w69" [d4ef6772-0830-445b-a40a-bc3b1fff8819] Running
	I0605 18:05:29.028946  499748 system_pods.go:89] "etcd-test-preload-802397" [6d35fd61-11b3-4f8e-83ab-536d3925c17c] Running
	I0605 18:05:29.028951  499748 system_pods.go:89] "kindnet-27r2t" [49d7f7e1-2685-4def-875d-18f00939cf48] Running
	I0605 18:05:29.028956  499748 system_pods.go:89] "kube-apiserver-test-preload-802397" [c66560c4-f905-4f0b-83f8-c0ecbcda124e] Running
	I0605 18:05:29.028962  499748 system_pods.go:89] "kube-controller-manager-test-preload-802397" [819503fe-6e57-416c-9e66-5fdd3e0320a3] Running
	I0605 18:05:29.028971  499748 system_pods.go:89] "kube-proxy-f9rwh" [7742d5e3-e571-4423-96dc-2ffdcd149632] Running
	I0605 18:05:29.028976  499748 system_pods.go:89] "kube-scheduler-test-preload-802397" [33364c2d-66ea-4f36-9909-1c3b08e27877] Running
	I0605 18:05:29.028981  499748 system_pods.go:89] "storage-provisioner" [162c770a-63de-407b-a4a2-708208a41226] Running
	I0605 18:05:29.028987  499748 system_pods.go:126] duration metric: took 204.908194ms to wait for k8s-apps to be running ...
	I0605 18:05:29.028994  499748 system_svc.go:44] waiting for kubelet service to be running ....
	I0605 18:05:29.029053  499748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 18:05:29.043732  499748 system_svc.go:56] duration metric: took 14.725578ms WaitForService to wait for kubelet.
	I0605 18:05:29.043760  499748 kubeadm.go:581] duration metric: took 3.399318141s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0605 18:05:29.043791  499748 node_conditions.go:102] verifying NodePressure condition ...
	I0605 18:05:29.229891  499748 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0605 18:05:29.229921  499748 node_conditions.go:123] node cpu capacity is 2
	I0605 18:05:29.229933  499748 node_conditions.go:105] duration metric: took 186.136609ms to run NodePressure ...
	I0605 18:05:29.229945  499748 start.go:228] waiting for startup goroutines ...
	I0605 18:05:29.229951  499748 start.go:233] waiting for cluster config update ...
	I0605 18:05:29.229961  499748 start.go:242] writing updated cluster config ...
	I0605 18:05:29.230256  499748 ssh_runner.go:195] Run: rm -f paused
	I0605 18:05:29.292236  499748 start.go:573] kubectl: 1.27.2, cluster: 1.24.4 (minor skew: 3)
	I0605 18:05:29.295110  499748 out.go:177] 
	W0605 18:05:29.297481  499748 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0605 18:05:29.299658  499748 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0605 18:05:29.301967  499748 out.go:177] * Done! kubectl is now configured to use "test-preload-802397" cluster and "default" namespace by default
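	The closing warning compares kubectl's minor version against the cluster's; a difference greater than one minor version is outside kubectl's supported skew, hence the "minikube kubectl" hint. The arithmetic behind "minor skew: 3", as a tiny sketch:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component of a semver-like version string.
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		kubectl, cluster := "1.27.2", "1.24.4" // values from the log above
		skew := minor(kubectl) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
		if skew > 1 {
			fmt.Println("! kubectl may have incompatibilities with this cluster")
		}
	}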
	
	* 
	* ==> CRI-O <==
	* Jun 05 18:04:48 test-preload-802397 crio[598]: time="2023-06-05 18:04:48.700640603Z" level=info msg="Started container" PID=1341 containerID=f7bf2762cd9ea26914769e0fa6f612c8eeab92eb1ff290c84bc6bf1d2e3489f7 description=kube-system/storage-provisioner/storage-provisioner id=cce35d35-a1bd-432d-96c6-395badd37ee3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d93a72de3582e4d4a5ea1799fb6d1c7dc6634349ff045ac7ae5064057814e3cf
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.674498402Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.679594204Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.679640013Z" level=info msg="Updated default CNI network name to kindnet"
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.679665006Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.684455389Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.684499229Z" level=info msg="Updated default CNI network name to kindnet"
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.684525485Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.690926869Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.690962823Z" level=info msg="Updated default CNI network name to kindnet"
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.690978799Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.695175616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jun 05 18:05:18 test-preload-802397 crio[598]: time="2023-06-05 18:05:18.695214098Z" level=info msg="Updated default CNI network name to kindnet"
	Jun 05 18:05:18 test-preload-802397 conmon[1317]: conmon f7bf2762cd9ea2691476 <ninfo>: container 1341 exited with status 1
	Jun 05 18:05:19 test-preload-802397 crio[598]: time="2023-06-05 18:05:19.491352443Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9835b52a-eb8e-4b4d-bbd4-c52362944d20 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:05:19 test-preload-802397 crio[598]: time="2023-06-05 18:05:19.491597981Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938],Size_:29035622,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9835b52a-eb8e-4b4d-bbd4-c52362944d20 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:05:19 test-preload-802397 crio[598]: time="2023-06-05 18:05:19.492686939Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cfb57ea8-2476-4421-b483-0ad2fdc38d74 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:05:19 test-preload-802397 crio[598]: time="2023-06-05 18:05:19.492916994Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938],Size_:29035622,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cfb57ea8-2476-4421-b483-0ad2fdc38d74 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:05:19 test-preload-802397 crio[598]: time="2023-06-05 18:05:19.493652500Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=fad83781-90bb-4a0d-9314-a1f89cc95bd3 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 05 18:05:19 test-preload-802397 crio[598]: time="2023-06-05 18:05:19.493743397Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 05 18:05:19 test-preload-802397 crio[598]: time="2023-06-05 18:05:19.506520325Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/800195b9c2933643992a166c9a58f698a12c9bed84b08fe62b82d636058fcd95/merged/etc/passwd: no such file or directory"
	Jun 05 18:05:19 test-preload-802397 crio[598]: time="2023-06-05 18:05:19.506696095Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/800195b9c2933643992a166c9a58f698a12c9bed84b08fe62b82d636058fcd95/merged/etc/group: no such file or directory"
	Jun 05 18:05:19 test-preload-802397 crio[598]: time="2023-06-05 18:05:19.575464144Z" level=info msg="Created container 4fa233bbc20facb3e53b94fe9bd2b8d58df1cf1f517280cb089f141bd1014acb: kube-system/storage-provisioner/storage-provisioner" id=fad83781-90bb-4a0d-9314-a1f89cc95bd3 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 05 18:05:19 test-preload-802397 crio[598]: time="2023-06-05 18:05:19.576394268Z" level=info msg="Starting container: 4fa233bbc20facb3e53b94fe9bd2b8d58df1cf1f517280cb089f141bd1014acb" id=2fd0b65e-3522-44bd-aba1-6aa28b3c771a name=/runtime.v1.RuntimeService/StartContainer
	Jun 05 18:05:19 test-preload-802397 crio[598]: time="2023-06-05 18:05:19.590906939Z" level=info msg="Started container" PID=1596 containerID=4fa233bbc20facb3e53b94fe9bd2b8d58df1cf1f517280cb089f141bd1014acb description=kube-system/storage-provisioner/storage-provisioner id=2fd0b65e-3522-44bd-aba1-6aa28b3c771a name=/runtime.v1.RuntimeService/StartContainer sandboxID=d93a72de3582e4d4a5ea1799fb6d1c7dc6634349ff045ac7ae5064057814e3cf
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4fa233bbc20fa       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51   11 seconds ago      Running             storage-provisioner       2                   d93a72de3582e       storage-provisioner
	de93f16a924a7       bd8cc6d58247078a865774b7f516f8afc3ac8cd080fd49650ca30ef2fbc6ebd1   42 seconds ago      Running             kube-proxy                1                   94499bc8ae4a2       kube-proxy-f9rwh
	f7bf2762cd9ea       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51   42 seconds ago      Exited              storage-provisioner       1                   d93a72de3582e       storage-provisioner
	318888ee47293       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   42 seconds ago      Running             kindnet-cni               1                   471882fbfdcf1       kindnet-27r2t
	c073843a0e366       edaa71f2aee883484133da046954ad70fd6bf1fa42e5aec3f7dae199c626299c   42 seconds ago      Running             coredns                   1                   1b512a43adf36       coredns-6d4b75cb6d-w5w69
	cbe9e0d55a49f       a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a   49 seconds ago      Running             etcd                      1                   ad1b212408f4a       etcd-test-preload-802397
	1dd440f73238f       5753e4610b3ec0ac100c3535b8d8a7507b3d031148e168c2c3c4b0f389976074   49 seconds ago      Running             kube-scheduler            1                   672e2d4bd9bc7       kube-scheduler-test-preload-802397
	8785b597942da       3767741e7fba72f328a8500a18ef34481343eb78697e31ae5bf3e390a28317ae   49 seconds ago      Running             kube-apiserver            1                   db99733d0c30d       kube-apiserver-test-preload-802397
	4d2e38e8ce97a       81a4a8a4ac639bdd7e118359417a80cab1a0d0e4737eb735714cf7f8b15dc0c7   49 seconds ago      Running             kube-controller-manager   1                   9b5a383566701       kube-controller-manager-test-preload-802397
	
	* 
	* ==> coredns [c073843a0e3667d383a66075775f1ea473a989dd7d9fbc4e4bac123db5f2f395] <==
	* [WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c452237b08d4ce46c54c803341046308
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:52951 - 44114 "HINFO IN 5106469081547104529.2242825384746329167. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024190094s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-802397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=test-preload-802397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b059332e570e1d712234ec4f823aa77854e7956d
	                    minikube.k8s.io/name=test-preload-802397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_05T18_03_38_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Jun 2023 18:03:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-802397
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Jun 2023 18:05:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Jun 2023 18:04:47 +0000   Mon, 05 Jun 2023 18:03:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Jun 2023 18:04:47 +0000   Mon, 05 Jun 2023 18:03:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Jun 2023 18:04:47 +0000   Mon, 05 Jun 2023 18:03:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Jun 2023 18:04:47 +0000   Mon, 05 Jun 2023 18:03:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    test-preload-802397
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e8b5fae92294537827920aadf0ed6b8
	  System UUID:                f717c163-a983-4e0b-b839-251f2dce98f8
	  Boot ID:                    da2c815d-c926-431d-a79c-25e8afa61b1d
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-w5w69                       100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     102s
	  kube-system                 etcd-test-preload-802397                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         114s
	  kube-system                 kindnet-27r2t                                  100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-test-preload-802397             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-test-preload-802397    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-f9rwh                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-test-preload-802397             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 42s                  kube-proxy       
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x5 over 2m5s)  kubelet          Node test-preload-802397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x5 over 2m5s)  kubelet          Node test-preload-802397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x4 over 2m5s)  kubelet          Node test-preload-802397 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node test-preload-802397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node test-preload-802397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node test-preload-802397 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           103s                 node-controller  Node test-preload-802397 event: Registered Node test-preload-802397 in Controller
	  Normal  NodeReady                94s                  kubelet          Node test-preload-802397 status is now: NodeReady
	  Normal  Starting                 51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)    kubelet          Node test-preload-802397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)    kubelet          Node test-preload-802397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 51s)    kubelet          Node test-preload-802397 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                  node-controller  Node test-preload-802397 event: Registered Node test-preload-802397 in Controller
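
The Events table above records at least two kubelet restarts ("Starting kubelet." 115s and 51s before capture) and two RegisteredNode events, consistent with the stop/restart cycle the preload test performs. While a cluster like this is still up, the same timeline can be pulled in chronological order with a standard kubectl listing (context name as used elsewhere in this report):

    kubectl --context test-preload-802397 get events -A --sort-by=.lastTimestamp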
	
	* 
	* ==> dmesg <==
	* [  +0.001064] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000961] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000e03af87c
	[  +0.001075] FS-Cache: N-key=[8] 'd1d1c90000000000'
	[  +0.004539] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000982] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=00000000e785b4d1
	[  +0.001078] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000723] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000005f019a4a
	[  +0.001044] FS-Cache: N-key=[8] 'd1d1c90000000000'
	[  +3.062644] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=0000000061ba42e8
	[  +0.001124] FS-Cache: O-key=[8] 'd0d1c90000000000'
	[  +0.000715] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000e03af87c
	[  +0.001040] FS-Cache: N-key=[8] 'd0d1c90000000000'
	[  +0.324591] FS-Cache: Duplicate cookie detected
	[  +0.000707] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=000000004b485b91
	[  +0.001042] FS-Cache: O-key=[8] 'd6d1c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000995] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000003ad92423
	[  +0.001057] FS-Cache: N-key=[8] 'd6d1c90000000000'
	
	* 
	* ==> etcd [cbe9e0d55a49f73001fd5f80208361f614d43b3d7058b6715016e5519d1c35df] <==
	* {"level":"info","ts":"2023-06-05T18:04:41.288Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-06-05T18:04:41.289Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-06-05T18:04:41.305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-06-05T18:04:41.305Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-06-05T18:04:41.305Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T18:04:41.305Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T18:04:41.309Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-05T18:04:41.309Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-05T18:04:41.309Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-05T18:04:41.309Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-06-05T18:04:41.309Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-06-05T18:04:43.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-06-05T18:04:43.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-06-05T18:04:43.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-06-05T18:04:43.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-06-05T18:04:43.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-06-05T18:04:43.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-06-05T18:04:43.130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-06-05T18:04:43.134Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:test-preload-802397 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-05T18:04:43.134Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T18:04:43.134Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T18:04:43.141Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-05T18:04:43.149Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-06-05T18:04:43.156Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-05T18:04:43.156Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  18:05:31 up  2:47,  0 users,  load average: 1.77, 1.70, 1.83
	Linux test-preload-802397 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [318888ee4729305585c04ae2d2887be611252712841bbeedd334d78eff609767] <==
	* I0605 18:04:48.333957       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0605 18:04:48.334020       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0605 18:04:48.334139       1 main.go:116] setting mtu 1500 for CNI 
	I0605 18:04:48.334149       1 main.go:146] kindnetd IP family: "ipv4"
	I0605 18:04:48.334162       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0605 18:05:18.660169       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0605 18:05:18.674152       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0605 18:05:18.674201       1 main.go:227] handling current node
	I0605 18:05:28.693973       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0605 18:05:28.694001       1 main.go:227] handling current node
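
Note the timing: kindnet comes up at 18:04:48, but its first node list against the Service VIP 10.96.0.1:443 only fails with an i/o timeout at 18:05:18, after which the retry succeeds immediately. That is the same VIP, and the same ~30-second window, in which the storage-provisioner dies below. Confirming the VIP and its backing endpoint is a one-liner each:

    kubectl --context test-preload-802397 get svc kubernetes
    kubectl --context test-preload-802397 get endpoints kubernetes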
	
	* 
	* ==> kube-apiserver [8785b597942dacd415b231cbab1245392f791fdf5efd7d9f1a58d6453eef169c] <==
	* I0605 18:04:46.871766       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0605 18:04:46.871806       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0605 18:04:46.871844       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0605 18:04:46.858593       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0605 18:04:46.859464       1 controller.go:83] Starting OpenAPI AggregationController
	I0605 18:04:46.849048       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0605 18:04:46.994935       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0605 18:04:47.003524       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0605 18:04:47.024939       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0605 18:04:47.049934       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0605 18:04:47.076633       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0605 18:04:47.077529       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0605 18:04:47.084959       1 cache.go:39] Caches are synced for autoregister controller
	I0605 18:04:47.085233       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0605 18:04:47.085797       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0605 18:04:47.442035       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0605 18:04:47.862948       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0605 18:04:48.938148       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0605 18:04:49.484869       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0605 18:04:49.615301       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0605 18:04:49.626406       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0605 18:04:49.685552       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0605 18:04:49.692551       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0605 18:04:59.436060       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0605 18:04:59.538856       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [4d2e38e8ce97ae08d4276dd2ce471661d28460b8a819fb2df41d3e79acd9743e] <==
	* I0605 18:04:59.480626       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0605 18:04:59.481907       1 shared_informer.go:262] Caches are synced for cronjob
	I0605 18:04:59.485489       1 shared_informer.go:262] Caches are synced for PVC protection
	I0605 18:04:59.487749       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0605 18:04:59.489990       1 shared_informer.go:262] Caches are synced for service account
	I0605 18:04:59.496465       1 shared_informer.go:262] Caches are synced for HPA
	I0605 18:04:59.501728       1 shared_informer.go:262] Caches are synced for job
	I0605 18:04:59.501946       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0605 18:04:59.504101       1 shared_informer.go:262] Caches are synced for daemon sets
	I0605 18:04:59.506409       1 shared_informer.go:262] Caches are synced for stateful set
	I0605 18:04:59.506518       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0605 18:04:59.526076       1 shared_informer.go:262] Caches are synced for endpoint
	I0605 18:04:59.624328       1 shared_informer.go:262] Caches are synced for taint
	I0605 18:04:59.624461       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0605 18:04:59.624592       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-802397. Assuming now as a timestamp.
	I0605 18:04:59.624648       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0605 18:04:59.624732       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0605 18:04:59.624974       1 event.go:294] "Event occurred" object="test-preload-802397" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-802397 event: Registered Node test-preload-802397 in Controller"
	I0605 18:04:59.663313       1 shared_informer.go:262] Caches are synced for resource quota
	I0605 18:04:59.674005       1 shared_informer.go:262] Caches are synced for disruption
	I0605 18:04:59.674053       1 disruption.go:371] Sending events to api server.
	I0605 18:04:59.678920       1 shared_informer.go:262] Caches are synced for resource quota
	I0605 18:05:00.087391       1 shared_informer.go:262] Caches are synced for garbage collector
	I0605 18:05:00.087431       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0605 18:05:00.132784       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [de93f16a924a787637472b54ed2f92344f6b8ece8d4094af415deb10b5659820] <==
	* I0605 18:04:48.903648       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0605 18:04:48.903748       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0605 18:04:48.903788       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0605 18:04:48.930953       1 server_others.go:206] "Using iptables Proxier"
	I0605 18:04:48.930997       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0605 18:04:48.931007       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0605 18:04:48.931021       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0605 18:04:48.931090       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0605 18:04:48.931216       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0605 18:04:48.931434       1 server.go:661] "Version info" version="v1.24.4"
	I0605 18:04:48.931449       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0605 18:04:48.932728       1 config.go:317] "Starting service config controller"
	I0605 18:04:48.932780       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0605 18:04:48.932851       1 config.go:226] "Starting endpoint slice config controller"
	I0605 18:04:48.932870       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0605 18:04:48.934708       1 config.go:444] "Starting node config controller"
	I0605 18:04:48.934730       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0605 18:04:49.033455       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0605 18:04:49.033547       1 shared_informer.go:262] Caches are synced for service config
	I0605 18:04:49.035009       1 shared_informer.go:262] Caches are synced for node config
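
kube-proxy reports an empty proxyMode and falls back to iptables ("Unknown proxy mode, assuming iptables proxy"), so Service VIPs such as 10.96.0.1 are realized as DNAT rules on the node rather than by a userspace proxy. The configured mode lives in the kube-proxy ConfigMap; a quick hedged check:

    kubectl --context test-preload-802397 -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'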
	
	* 
	* ==> kube-scheduler [1dd440f73238fc4f87dcad0d74abe95dc0dd43d3fc9f47cdc34e832b34ef8426] <==
	* I0605 18:04:44.540627       1 serving.go:348] Generated self-signed cert in-memory
	W0605 18:04:46.955842       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0605 18:04:46.956003       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0605 18:04:46.956044       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0605 18:04:46.956084       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0605 18:04:47.011006       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0605 18:04:47.011039       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0605 18:04:47.012558       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0605 18:04:47.012735       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0605 18:04:47.012782       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0605 18:04:47.012830       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0605 18:04:47.113252       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.090747     907 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-802397"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.260876     907 apiserver.go:52] "Watching apiserver"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.269310     907 topology_manager.go:200] "Topology Admit Handler"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.269472     907 topology_manager.go:200] "Topology Admit Handler"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.269541     907 topology_manager.go:200] "Topology Admit Handler"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.269589     907 topology_manager.go:200] "Topology Admit Handler"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352623     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmw9\" (UniqueName: \"kubernetes.io/projected/49d7f7e1-2685-4def-875d-18f00939cf48-kube-api-access-dcmw9\") pod \"kindnet-27r2t\" (UID: \"49d7f7e1-2685-4def-875d-18f00939cf48\") " pod="kube-system/kindnet-27r2t"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352697     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/162c770a-63de-407b-a4a2-708208a41226-tmp\") pod \"storage-provisioner\" (UID: \"162c770a-63de-407b-a4a2-708208a41226\") " pod="kube-system/storage-provisioner"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352728     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqpt\" (UniqueName: \"kubernetes.io/projected/162c770a-63de-407b-a4a2-708208a41226-kube-api-access-ksqpt\") pod \"storage-provisioner\" (UID: \"162c770a-63de-407b-a4a2-708208a41226\") " pod="kube-system/storage-provisioner"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352758     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw7l8\" (UniqueName: \"kubernetes.io/projected/7742d5e3-e571-4423-96dc-2ffdcd149632-kube-api-access-tw7l8\") pod \"kube-proxy-f9rwh\" (UID: \"7742d5e3-e571-4423-96dc-2ffdcd149632\") " pod="kube-system/kube-proxy-f9rwh"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352790     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49d7f7e1-2685-4def-875d-18f00939cf48-lib-modules\") pod \"kindnet-27r2t\" (UID: \"49d7f7e1-2685-4def-875d-18f00939cf48\") " pod="kube-system/kindnet-27r2t"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352818     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9zq5\" (UniqueName: \"kubernetes.io/projected/d4ef6772-0830-445b-a40a-bc3b1fff8819-kube-api-access-p9zq5\") pod \"coredns-6d4b75cb6d-w5w69\" (UID: \"d4ef6772-0830-445b-a40a-bc3b1fff8819\") " pod="kube-system/coredns-6d4b75cb6d-w5w69"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352843     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7742d5e3-e571-4423-96dc-2ffdcd149632-xtables-lock\") pod \"kube-proxy-f9rwh\" (UID: \"7742d5e3-e571-4423-96dc-2ffdcd149632\") " pod="kube-system/kube-proxy-f9rwh"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352868     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/49d7f7e1-2685-4def-875d-18f00939cf48-cni-cfg\") pod \"kindnet-27r2t\" (UID: \"49d7f7e1-2685-4def-875d-18f00939cf48\") " pod="kube-system/kindnet-27r2t"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352893     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7742d5e3-e571-4423-96dc-2ffdcd149632-lib-modules\") pod \"kube-proxy-f9rwh\" (UID: \"7742d5e3-e571-4423-96dc-2ffdcd149632\") " pod="kube-system/kube-proxy-f9rwh"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352918     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4ef6772-0830-445b-a40a-bc3b1fff8819-config-volume\") pod \"coredns-6d4b75cb6d-w5w69\" (UID: \"d4ef6772-0830-445b-a40a-bc3b1fff8819\") " pod="kube-system/coredns-6d4b75cb6d-w5w69"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352944     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7742d5e3-e571-4423-96dc-2ffdcd149632-kube-proxy\") pod \"kube-proxy-f9rwh\" (UID: \"7742d5e3-e571-4423-96dc-2ffdcd149632\") " pod="kube-system/kube-proxy-f9rwh"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352969     907 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49d7f7e1-2685-4def-875d-18f00939cf48-xtables-lock\") pod \"kindnet-27r2t\" (UID: \"49d7f7e1-2685-4def-875d-18f00939cf48\") " pod="kube-system/kindnet-27r2t"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: I0605 18:04:47.352982     907 reconciler.go:159] "Reconciler: start to sync state"
	Jun 05 18:04:47 test-preload-802397 kubelet[907]: W0605 18:04:47.924407     907 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/2b9cc6c9c5f69dbf2e9f79818bcd723716cc1ef90629e17d2b2225ec521c94dc/crio/crio-1b512a43adf36e9badf4a10963c54afb6b719f4d84295536994e43c8e8f0e9aa WatchSource:0}: Error finding container 1b512a43adf36e9badf4a10963c54afb6b719f4d84295536994e43c8e8f0e9aa: Status 404 returned error &{%!s(*http.body=&{0x4000e31530 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7e6400) %!s(func() error=0x7e6500)}
	Jun 05 18:04:48 test-preload-802397 kubelet[907]: W0605 18:04:48.188741     907 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/2b9cc6c9c5f69dbf2e9f79818bcd723716cc1ef90629e17d2b2225ec521c94dc/crio/crio-471882fbfdcf12f3046fa0d6ba6387be52f2d49d4a894ffc88dab62b81a053e9 WatchSource:0}: Error finding container 471882fbfdcf12f3046fa0d6ba6387be52f2d49d4a894ffc88dab62b81a053e9: Status 404 returned error &{%!s(*http.body=&{0x4000ee81b0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7e6400) %!s(func() error=0x7e6500)}
	Jun 05 18:04:48 test-preload-802397 kubelet[907]: W0605 18:04:48.500038     907 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/2b9cc6c9c5f69dbf2e9f79818bcd723716cc1ef90629e17d2b2225ec521c94dc/crio/crio-d93a72de3582e4d4a5ea1799fb6d1c7dc6634349ff045ac7ae5064057814e3cf WatchSource:0}: Error finding container d93a72de3582e4d4a5ea1799fb6d1c7dc6634349ff045ac7ae5064057814e3cf: Status 404 returned error &{%!s(*http.body=&{0x4000ee9f50 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x7e6400) %!s(func() error=0x7e6500)}
	Jun 05 18:04:49 test-preload-802397 kubelet[907]: I0605 18:04:49.437665     907 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
	Jun 05 18:04:54 test-preload-802397 kubelet[907]: I0605 18:04:54.823797     907 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
	Jun 05 18:05:19 test-preload-802397 kubelet[907]: I0605 18:05:19.490738     907 scope.go:110] "RemoveContainer" containerID="f7bf2762cd9ea26914769e0fa6f612c8eeab92eb1ff290c84bc6bf1d2e3489f7"
	
	* 
	* ==> storage-provisioner [4fa233bbc20facb3e53b94fe9bd2b8d58df1cf1f517280cb089f141bd1014acb] <==
	* I0605 18:05:19.604709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0605 18:05:19.621502       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0605 18:05:19.621662       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [f7bf2762cd9ea26914769e0fa6f612c8eeab92eb1ff290c84bc6bf1d2e3489f7] <==
	* I0605 18:04:48.757771       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0605 18:05:18.760159       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
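
This is the restart chain behind the two storage-provisioner entries: the first instance (f7bf2762…) initializes at 18:04:48, dies exactly 30s later with this fatal i/o timeout against https://10.96.0.1:443, the kubelet removes it at 18:05:19 (the RemoveContainer line above), and the replacement (4fa233bb…) comes up and begins leader election. Since that VIP is programmed by kube-proxy's iptables rules, one sketch for inspecting the DNAT path from the node itself (KUBE-SERVICES is the standard kube-proxy chain):

    out/minikube-linux-arm64 -p test-preload-802397 ssh -- sudo iptables -t nat -S KUBE-SERVICES | grep 10.96.0.1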
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p test-preload-802397 -n test-preload-802397
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-802397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-802397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-802397
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-802397: (2.470996252s)
--- FAIL: TestPreload (183.00s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (71.94s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.1253318640.exe start -p running-upgrade-783662 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.17.0.1253318640.exe start -p running-upgrade-783662 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m3.18638406s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-783662 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-783662 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.942389188s)
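
The flow under test is two starts against one profile: the released v1.17.0 binary brings the cluster up (1m3s), then the binary under test re-drives the same profile and exits with status 90 shortly after "Updating the running docker ... container ...". When a start dies at that stage it is worth grabbing the full log bundle while the profile still exists, for example:

    out/minikube-linux-arm64 logs -p running-upgrade-783662 --file=running-upgrade-783662.log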

                                                
                                                
-- stdout --
	* [running-upgrade-783662] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-783662 in cluster running-upgrade-783662
	* Pulling base image ...
	* Updating the running docker "running-upgrade-783662" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0605 18:12:58.812656  529215 out.go:296] Setting OutFile to fd 1 ...
	I0605 18:12:58.812775  529215 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:12:58.812785  529215 out.go:309] Setting ErrFile to fd 2...
	I0605 18:12:58.812790  529215 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:12:58.813004  529215 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 18:12:58.813477  529215 out.go:303] Setting JSON to false
	I0605 18:12:58.814678  529215 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10511,"bootTime":1685978268,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 18:12:58.814755  529215 start.go:137] virtualization:  
	I0605 18:12:58.819167  529215 out.go:177] * [running-upgrade-783662] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 18:12:58.822728  529215 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 18:12:58.824892  529215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:12:58.823414  529215 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0605 18:12:58.826315  529215 notify.go:220] Checking for updates...
	I0605 18:12:58.831398  529215 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 18:12:58.834772  529215 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 18:12:58.836899  529215 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 18:12:58.839483  529215 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 18:12:58.841857  529215 config.go:182] Loaded profile config "running-upgrade-783662": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0605 18:12:58.844572  529215 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0605 18:12:58.847098  529215 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 18:12:58.891946  529215 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 18:12:58.892062  529215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:12:59.014647  529215 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0605 18:12:59.022093  529215 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-06-05 18:12:59.005221078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:12:59.022228  529215 docker.go:294] overlay module found
	I0605 18:12:59.024848  529215 out.go:177] * Using the docker driver based on existing profile
	I0605 18:12:59.026899  529215 start.go:297] selected driver: docker
	I0605 18:12:59.026925  529215 start.go:875] validating driver "docker" against &{Name:running-upgrade-783662 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-783662 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.47 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:12:59.027042  529215 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 18:12:59.027894  529215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:12:59.106549  529215 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-06-05 18:12:59.096324653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:12:59.106917  529215 cni.go:84] Creating CNI manager for ""
	I0605 18:12:59.106936  529215 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 18:12:59.106946  529215 start_flags.go:319] config:
	{Name:running-upgrade-783662 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-783662 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.47 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:12:59.110134  529215 out.go:177] * Starting control plane node running-upgrade-783662 in cluster running-upgrade-783662
	I0605 18:12:59.112087  529215 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 18:12:59.114050  529215 out.go:177] * Pulling base image ...
	I0605 18:12:59.116211  529215 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0605 18:12:59.116412  529215 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0605 18:12:59.138580  529215 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0605 18:12:59.138605  529215 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0605 18:12:59.189082  529215 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0605 18:12:59.189246  529215 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/running-upgrade-783662/config.json ...
	I0605 18:12:59.189321  529215 cache.go:107] acquiring lock: {Name:mke7d9c39614b8aa3703697d7ecb327c1115ec14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:12:59.189411  529215 cache.go:115] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0605 18:12:59.189421  529215 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.692µs
	I0605 18:12:59.189430  529215 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0605 18:12:59.189437  529215 cache.go:107] acquiring lock: {Name:mk9b077cbb162a3def5f13efe6aec1090d859929 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:12:59.189468  529215 cache.go:115] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0605 18:12:59.189473  529215 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 38.063µs
	I0605 18:12:59.189480  529215 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0605 18:12:59.189486  529215 cache.go:107] acquiring lock: {Name:mkf89e75c0602e1c252968b56ff8cc4bd72441ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:12:59.189506  529215 cache.go:195] Successfully downloaded all kic artifacts
	I0605 18:12:59.189512  529215 cache.go:115] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0605 18:12:59.189517  529215 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.73µs
	I0605 18:12:59.189523  529215 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0605 18:12:59.189527  529215 start.go:364] acquiring machines lock for running-upgrade-783662: {Name:mk9a721fbf37ca8a133657823bf4aa0f1e64da60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:12:59.189529  529215 cache.go:107] acquiring lock: {Name:mk6574677fe8d67704e4a0832798ac373de775db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:12:59.189553  529215 cache.go:107] acquiring lock: {Name:mk3d85b25fac1906d9bfbd666052fd791466bb90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:12:59.189571  529215 cache.go:115] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0605 18:12:59.189579  529215 cache.go:115] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0605 18:12:59.189582  529215 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 50.15µs
	I0605 18:12:59.189592  529215 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0605 18:12:59.189584  529215 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 32.049µs
	I0605 18:12:59.189601  529215 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0605 18:12:59.189593  529215 cache.go:107] acquiring lock: {Name:mk77a30a50f7f51e51b4de59ba184166d3e94bc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:12:59.189611  529215 cache.go:107] acquiring lock: {Name:mk714af935d78082ce046109683e24eddb0f6ddd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:12:59.189630  529215 cache.go:115] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0605 18:12:59.189636  529215 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 43.749µs
	I0605 18:12:59.189642  529215 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0605 18:12:59.189650  529215 cache.go:115] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0605 18:12:59.189654  529215 start.go:368] acquired machines lock for "running-upgrade-783662" in 114.823µs
	I0605 18:12:59.189655  529215 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 45.53µs
	I0605 18:12:59.189661  529215 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0605 18:12:59.189668  529215 start.go:96] Skipping create...Using existing machine configuration
	I0605 18:12:59.189667  529215 cache.go:107] acquiring lock: {Name:mkecc658453dc6ee92263a587abd071789ce9754 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:12:59.189693  529215 cache.go:115] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0605 18:12:59.189701  529215 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 30.909µs
	I0605 18:12:59.189707  529215 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0605 18:12:59.189711  529215 cache.go:87] Successfully saved all images to host disk.
	I0605 18:12:59.189674  529215 fix.go:55] fixHost starting: 
	I0605 18:12:59.189967  529215 cli_runner.go:164] Run: docker container inspect running-upgrade-783662 --format={{.State.Status}}
	I0605 18:12:59.208210  529215 fix.go:103] recreateIfNeeded on running-upgrade-783662: state=Running err=<nil>
	W0605 18:12:59.208238  529215 fix.go:129] unexpected machine state, will restart: <nil>
	I0605 18:12:59.211632  529215 out.go:177] * Updating the running docker "running-upgrade-783662" container ...
	I0605 18:12:59.213672  529215 machine.go:88] provisioning docker machine ...
	I0605 18:12:59.213699  529215 ubuntu.go:169] provisioning hostname "running-upgrade-783662"
	I0605 18:12:59.213776  529215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-783662
	I0605 18:12:59.232206  529215 main.go:141] libmachine: Using SSH client type: native
	I0605 18:12:59.232683  529215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33308 <nil> <nil>}
	I0605 18:12:59.232702  529215 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-783662 && echo "running-upgrade-783662" | sudo tee /etc/hostname
	I0605 18:12:59.393884  529215 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-783662
	
	I0605 18:12:59.393975  529215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-783662
	I0605 18:12:59.412114  529215 main.go:141] libmachine: Using SSH client type: native
	I0605 18:12:59.412559  529215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33308 <nil> <nil>}
	I0605 18:12:59.412577  529215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-783662' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-783662/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-783662' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0605 18:12:59.557736  529215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0605 18:12:59.557758  529215 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16634-402421/.minikube CaCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16634-402421/.minikube}
	I0605 18:12:59.557784  529215 ubuntu.go:177] setting up certificates
	I0605 18:12:59.557797  529215 provision.go:83] configureAuth start
	I0605 18:12:59.557864  529215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-783662
	I0605 18:12:59.581679  529215 provision.go:138] copyHostCerts
	I0605 18:12:59.581745  529215 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem, removing ...
	I0605 18:12:59.581771  529215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem
	I0605 18:12:59.581848  529215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem (1082 bytes)
	I0605 18:12:59.581958  529215 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem, removing ...
	I0605 18:12:59.581963  529215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem
	I0605 18:12:59.581990  529215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem (1123 bytes)
	I0605 18:12:59.582041  529215 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem, removing ...
	I0605 18:12:59.582046  529215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem
	I0605 18:12:59.582069  529215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem (1675 bytes)
	I0605 18:12:59.582112  529215 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-783662 san=[192.168.70.47 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-783662]
	I0605 18:12:59.777910  529215 provision.go:172] copyRemoteCerts
	I0605 18:12:59.777987  529215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0605 18:12:59.778038  529215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-783662
	I0605 18:12:59.802930  529215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33308 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/running-upgrade-783662/id_rsa Username:docker}
	I0605 18:12:59.905459  529215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0605 18:12:59.945389  529215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0605 18:12:59.974742  529215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0605 18:13:00.007246  529215 provision.go:86] duration metric: configureAuth took 449.404641ms
	I0605 18:13:00.007279  529215 ubuntu.go:193] setting minikube options for container-runtime
	I0605 18:13:00.007526  529215 config.go:182] Loaded profile config "running-upgrade-783662": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0605 18:13:00.008760  529215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-783662
	I0605 18:13:00.081801  529215 main.go:141] libmachine: Using SSH client type: native
	I0605 18:13:00.082603  529215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33308 <nil> <nil>}
	I0605 18:13:00.082634  529215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0605 18:13:00.764678  529215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0605 18:13:00.764766  529215 machine.go:91] provisioned docker machine in 1.551078958s
	I0605 18:13:00.764823  529215 start.go:300] post-start starting for "running-upgrade-783662" (driver="docker")
	I0605 18:13:00.764854  529215 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0605 18:13:00.764948  529215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0605 18:13:00.765007  529215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-783662
	I0605 18:13:00.795883  529215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33308 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/running-upgrade-783662/id_rsa Username:docker}
	I0605 18:13:00.902131  529215 ssh_runner.go:195] Run: cat /etc/os-release
	I0605 18:13:00.906412  529215 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0605 18:13:00.906438  529215 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0605 18:13:00.906450  529215 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0605 18:13:00.906457  529215 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0605 18:13:00.906469  529215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/addons for local assets ...
	I0605 18:13:00.906527  529215 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/files for local assets ...
	I0605 18:13:00.906620  529215 filesync.go:149] local asset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> 4078132.pem in /etc/ssl/certs
	I0605 18:13:00.906736  529215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0605 18:13:00.916032  529215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 18:13:00.951151  529215 start.go:303] post-start completed in 186.290433ms
	I0605 18:13:00.951254  529215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 18:13:00.951306  529215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-783662
	I0605 18:13:00.973250  529215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33308 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/running-upgrade-783662/id_rsa Username:docker}
	I0605 18:13:01.076404  529215 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0605 18:13:01.083835  529215 fix.go:57] fixHost completed within 1.894152784s
	I0605 18:13:01.083856  529215 start.go:83] releasing machines lock for "running-upgrade-783662", held for 1.894195558s
	I0605 18:13:01.083946  529215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-783662
	I0605 18:13:01.106780  529215 ssh_runner.go:195] Run: cat /version.json
	I0605 18:13:01.106839  529215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-783662
	I0605 18:13:01.106791  529215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0605 18:13:01.106999  529215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-783662
	I0605 18:13:01.153161  529215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33308 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/running-upgrade-783662/id_rsa Username:docker}
	I0605 18:13:01.184824  529215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33308 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/running-upgrade-783662/id_rsa Username:docker}
	W0605 18:13:01.292666  529215 start.go:414] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0605 18:13:01.292754  529215 ssh_runner.go:195] Run: systemctl --version
	I0605 18:13:01.382594  529215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0605 18:13:01.536792  529215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0605 18:13:01.544265  529215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 18:13:01.581383  529215 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0605 18:13:01.581517  529215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 18:13:01.654453  529215 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0605 18:13:01.654477  529215 start.go:481] detecting cgroup driver to use...
	I0605 18:13:01.654535  529215 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0605 18:13:01.654617  529215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0605 18:13:01.737802  529215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0605 18:13:01.756151  529215 docker.go:193] disabling cri-docker service (if available) ...
	I0605 18:13:01.756251  529215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0605 18:13:01.778302  529215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0605 18:13:01.793652  529215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0605 18:13:01.823699  529215 docker.go:203] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0605 18:13:01.823809  529215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0605 18:13:02.052465  529215 docker.go:209] disabling docker service ...
	I0605 18:13:02.052592  529215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0605 18:13:02.088740  529215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0605 18:13:02.150179  529215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0605 18:13:02.467182  529215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0605 18:13:02.622116  529215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0605 18:13:02.635328  529215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 18:13:02.658464  529215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0605 18:13:02.658535  529215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:13:02.675588  529215 out.go:177] 
	W0605 18:13:02.677258  529215 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0605 18:13:02.677287  529215 out.go:239] * 
	* 
	W0605 18:13:02.678266  529215 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0605 18:13:02.679952  529215 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-783662 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-06-05 18:13:02.710855255 +0000 UTC m=+2550.526862240
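Note on the failure above: the kicbase v0.0.17 image used by this profile evidently lacks /etc/crio/crio.conf.d/02-crio.conf (per the sed error in the log), so the step that rewrites pause_image exits with status 2 and minikube surfaces it as RUNTIME_ENABLE. Below is a minimal sketch of a more defensive version of that step (illustration only, not minikube's actual provisioning code; the existence guard, the drop-in contents, and running locally via sh -c instead of over SSH are all assumptions):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Path and pause image taken from the failing command in the log above.
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		pause := "registry.k8s.io/pause:3.2"
		// Create a minimal drop-in when it is absent, then rewrite pause_image,
		// so sed never sees a missing file. In minikube this would run over SSH
		// on the node; here it runs locally for illustration.
		script := fmt.Sprintf(`sudo mkdir -p /etc/crio/crio.conf.d && `+
			`{ [ -f %[1]s ] || printf '[crio.image]\npause_image = "%[2]s"\n' | sudo tee %[1]s; } && `+
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%[2]s"|' %[1]s`,
			conf, pause)
		out, err := exec.Command("sh", "-c", script).CombinedOutput()
		fmt.Printf("%s(err: %v)\n", out, err)
	}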
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-783662
helpers_test.go:235: (dbg) docker inspect running-upgrade-783662:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c4fb4afb1a76a170f7b11804a26313022a9aafc0b3b8d6e0969a92db26605bbb",
	        "Created": "2023-06-05T18:12:09.313947864Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 525738,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-05T18:12:09.927734124Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/c4fb4afb1a76a170f7b11804a26313022a9aafc0b3b8d6e0969a92db26605bbb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4fb4afb1a76a170f7b11804a26313022a9aafc0b3b8d6e0969a92db26605bbb/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4fb4afb1a76a170f7b11804a26313022a9aafc0b3b8d6e0969a92db26605bbb/hosts",
	        "LogPath": "/var/lib/docker/containers/c4fb4afb1a76a170f7b11804a26313022a9aafc0b3b8d6e0969a92db26605bbb/c4fb4afb1a76a170f7b11804a26313022a9aafc0b3b8d6e0969a92db26605bbb-json.log",
	        "Name": "/running-upgrade-783662",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-783662:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-783662",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6fc15d291dcd141bdcb6a6147e098120372f83f7df05aa52e29ddb292783ae68-init/diff:/var/lib/docker/overlay2/252f4aa680b64095d7918bfc60bfbd38a8cbfd2429b698ae185d93dbb15a2bc5/diff:/var/lib/docker/overlay2/b8427ff981214858ac0d3062bcfaec3015c39ab4a2643a7806fb0e56a7b7c5ca/diff:/var/lib/docker/overlay2/dfe7c320971f04bf2a3b2dda995ab99d41c8339304056fe14cc31fc1165d9be5/diff:/var/lib/docker/overlay2/7af43579405ab95388d0b53c03275244a05c15f75aa997f91090b20d2520db28/diff:/var/lib/docker/overlay2/fb08ed39580d1188080b2c028c61fca21451814ca2f2b8709c9cebd6528cac3b/diff:/var/lib/docker/overlay2/a461cb1ed0f03dbe97a5dda69d1c0b8a263e26c77a5d3e7f6afb9f86e4431271/diff:/var/lib/docker/overlay2/8e97bafb5e081812a022d70489b0063ce6206c6f1c4eceae14f83168942af9bf/diff:/var/lib/docker/overlay2/1a92ee1fb5f1061f8d116e2cf2896cea39ac268ab410893093be0e28799f537d/diff:/var/lib/docker/overlay2/b1493c48162cfb5eaa2ca739c105d8a306bfd9fd7c3869ac3e597dbcfaefd655/diff:/var/lib/docker/overlay2/3c96a9
5dacf11c48a9dacbf8989e7161004443fdb0793d245ad2bd6739115996/diff:/var/lib/docker/overlay2/37a8bc5e499a355fc2dbfe40839990bce8d8b7862f86529996e72e4a36087b11/diff:/var/lib/docker/overlay2/6764cde192e95463c8190cabc90acb10b3f820586603f8d534e46d56545a8edd/diff:/var/lib/docker/overlay2/40b6e56c9b17af27a68f74b0e713c8129ecb2f47fea9619a490b8add87b1b3ba/diff:/var/lib/docker/overlay2/5c5b1a0574bd9b4f183dddb7ddf81a7842d873c76c6277d2224cd6c7484cee98/diff:/var/lib/docker/overlay2/4189de23263c2bdaf814ca2dc7dfd5ff264fae821691b4051e7c618fa45dfdc6/diff:/var/lib/docker/overlay2/9cd28dee04405d1c5894fb2e0e38e70defd19be42caae7f8e0d9e99db34f03f2/diff:/var/lib/docker/overlay2/e29b64b795bd0739cdfccabe17030b9ed8c517d59f09222afadb444f2df92086/diff:/var/lib/docker/overlay2/1e781dc0b0069281bfbe18af42b23ef34eabdd9603d55f6c977a8e5412f46719/diff:/var/lib/docker/overlay2/bd0e4bc06b8cf24d08138ecd471ca8a8bc6c39a944e349a2d209e6e91cd76914/diff:/var/lib/docker/overlay2/02192f162cd3505ac4673d58b8337ee57aa234262ee327aff8ceb10710daa7b7/diff:/var/lib/d
ocker/overlay2/14e86609f240f8edc377f5c8e3e6cd32d0ef1791828e444c7e0f3bce72614043/diff:/var/lib/docker/overlay2/b4a21fb5b8007b40be9abb9c5eec9a4fe9d21c7a4577e9aedddc7ebe559ab9a5/diff:/var/lib/docker/overlay2/0ca736b64db58b44c08cb95758368c2537a6f3047e9fa86ac2b94e1b52ecd549/diff:/var/lib/docker/overlay2/7d9d4132aa3f21946bf12cbcc1a3d93c55354ae1f819646d1de0e92c418fc22f/diff:/var/lib/docker/overlay2/ba82554608f7a5cef3680cea2964a39dca80913a412c74d8987838e47f576335/diff:/var/lib/docker/overlay2/000b74f942b9729accc7c35b9d04d012baef66d5257f14fcb72983d683ab4298/diff:/var/lib/docker/overlay2/aa8dee9521075cf5f94d1a7f290023cb1fd0564586bfff8a76c585ca73bf65d8/diff:/var/lib/docker/overlay2/8590be585add1c39b4544b4b94294aa9d4d982ad5cb005d3b0131244a5430e51/diff:/var/lib/docker/overlay2/d9bebd5cab264b992d261410677ead253e4d63aae8b98bfa77e6bf8528a1d288/diff:/var/lib/docker/overlay2/300e34f6aa508c0adda44bfac88b6c4ecf26d8e1d53b4ac471549be1224d5c16/diff:/var/lib/docker/overlay2/a3ad194457575390b327febfba62b8f25be21ab2ea7a26ac926d8db3947
f11c8/diff:/var/lib/docker/overlay2/1f4aea0993a76c5b30f77acd79358d083ab9efc738c1d9ca7eac1737f97fb0b6/diff:/var/lib/docker/overlay2/1f3e036b54cb19aa46e20080336c2311e1db68691d6202a8e843a5b0c11ba7bf/diff:/var/lib/docker/overlay2/f33fd043d2b7ffdb11dcf685089604eaf336bccfee0154550412f509bd890641/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fc15d291dcd141bdcb6a6147e098120372f83f7df05aa52e29ddb292783ae68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fc15d291dcd141bdcb6a6147e098120372f83f7df05aa52e29ddb292783ae68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fc15d291dcd141bdcb6a6147e098120372f83f7df05aa52e29ddb292783ae68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-783662",
	                "Source": "/var/lib/docker/volumes/running-upgrade-783662/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-783662",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-783662",
	                "name.minikube.sigs.k8s.io": "running-upgrade-783662",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d6a16f90eeba73f5447d1536d3ec4a367f9296dacb51ec2b473a81e82c8eaa4c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33308"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33307"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33306"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33305"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d6a16f90eeba",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-783662": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.47"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c4fb4afb1a76",
	                        "running-upgrade-783662"
	                    ],
	                    "NetworkID": "9fe009b4f49e07ff0752561491b73b9d105111982277e22ce59d0903fd599b19",
	                    "EndpointID": "6a422d787be924dd8b466a3428b06a670e1c308a4aca8eea52a64ef24839347f",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.47",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:2f",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-783662 -n running-upgrade-783662
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-783662 -n running-upgrade-783662: exit status 4 (573.58878ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0605 18:13:03.264081  529903 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-783662" does not appear in /home/jenkins/minikube-integration/16634-402421/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-783662" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-783662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-783662
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-783662: (2.972780695s)
--- FAIL: TestRunningBinaryUpgrade (71.94s)

                                                
                                    
x
+
TestMissingContainerUpgrade (108.65s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.1676890140.exe start -p missing-upgrade-870219 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.1676890140.exe start -p missing-upgrade-870219 --memory=2200 --driver=docker  --container-runtime=crio: exit status 70 (1m24.467102249s)

                                                
                                                
-- stdout --
	! [missing-upgrade-870219] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-870219
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7834MB available) ...
	* Deleting "missing-upgrade-870219" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7834MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.30.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.30.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: check container "missing-upgrade-870219" running: temporary error created container "missing-upgrade-870219" is not running yet
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-870219" may fix it.: creating host: create: creating: create kic node: check container "missing-upgrade-870219" running: temporary error created container "missing-upgrade-870219" is not running yet
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
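Note on the error above: "created container ... is not running yet" is minikube v1.9.1 observing that the kic container exited immediately after creation, and it retries below. A rough standalone sketch of that style of readiness check, assuming only the docker CLI on PATH (illustration, not the actual kic code):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// waitRunning polls docker for the container's State.Running flag until it
	// turns true or the deadline passes, mirroring the style of check that
	// produced the error above.
	func waitRunning(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("docker", "inspect",
				"-f", "{{.State.Running}}", name).Output()
			if err == nil && strings.TrimSpace(string(out)) == "true" {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("created container %q is not running yet", name)
	}
	
	func main() {
		fmt.Println(waitRunning("missing-upgrade-870219", 10*time.Second))
	}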
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.1676890140.exe start -p missing-upgrade-870219 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.1676890140.exe start -p missing-upgrade-870219 --memory=2200 --driver=docker  --container-runtime=crio: exit status 70 (9.010308657s)

                                                
                                                
-- stdout --
	* [missing-upgrade-870219] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-870219
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-870219" ...
	* Restarting existing docker container for "missing-upgrade-870219" ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-870219", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-870219" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-870219", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.1676890140.exe start -p missing-upgrade-870219 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.1676890140.exe start -p missing-upgrade-870219 --memory=2200 --driver=docker  --container-runtime=crio: exit status 70 (6.829788762s)

                                                
                                                
-- stdout --
	* [missing-upgrade-870219] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-870219
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-870219" ...
	* Restarting existing docker container for "missing-upgrade-870219" ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-870219", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-870219" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-870219", output 
	template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:327: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-06-05 18:09:23.942891306 +0000 UTC m=+2331.758898291
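Note on the failure above: the "index of untyped nil" comes from the Go template that looks up the host-bound SSH port. The restarted container never published any ports (the docker inspect below shows "Ports": {}), so the inner index yields nil and the outer index errors. A small standalone reproduction, plus a guarded variant using {{with}} (illustration under assumed JSON-shaped data, not docker's or minikube's actual inspect code):

	package main
	
	import (
		"fmt"
		"os"
		"text/template"
	)
	
	func main() {
		// Shape of the inspect data for the exited container: no port bindings.
		data := map[string]interface{}{
			"NetworkSettings": map[string]interface{}{
				"Ports": map[string]interface{}{},
			},
		}
		// The lookup used in the log fails: the inner index returns untyped nil.
		bad := template.Must(template.New("bad").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		if err := bad.Execute(os.Stdout, data); err != nil {
			fmt.Println(err) // ... error calling index: index of untyped nil
		}
		// A guarded variant: {{with}} skips its body when the lookup is nil,
		// so the template prints nothing instead of erroring.
		ok := template.Must(template.New("ok").Parse(
			`{{with index .NetworkSettings.Ports "22/tcp"}}{{(index . 0).HostPort}}{{end}}`))
		if err := ok.Execute(os.Stdout, data); err != nil {
			fmt.Println(err)
		}
	}

The guarded form degrades to empty output rather than an error, which is why a caller can distinguish "no port published yet" from a genuinely malformed template.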
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-870219
helpers_test.go:235: (dbg) docker inspect missing-upgrade-870219:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24e0798d0135b81643de4545dead9e016fc6bad858dd466e7c89fad0ed32632b",
	        "Created": "2023-06-05T18:08:45.702448493Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 1,
	            "Error": "",
	            "StartedAt": "2023-06-05T18:09:23.693700095Z",
	            "FinishedAt": "2023-06-05T18:09:23.69218079Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/24e0798d0135b81643de4545dead9e016fc6bad858dd466e7c89fad0ed32632b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24e0798d0135b81643de4545dead9e016fc6bad858dd466e7c89fad0ed32632b/hostname",
	        "HostsPath": "/var/lib/docker/containers/24e0798d0135b81643de4545dead9e016fc6bad858dd466e7c89fad0ed32632b/hosts",
	        "LogPath": "/var/lib/docker/containers/24e0798d0135b81643de4545dead9e016fc6bad858dd466e7c89fad0ed32632b/24e0798d0135b81643de4545dead9e016fc6bad858dd466e7c89fad0ed32632b-json.log",
	        "Name": "/missing-upgrade-870219",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-870219:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc31f5118e93c40ae687b61601897a4560c775b98951ab84948a718750c0fea6-init/diff:/var/lib/docker/overlay2/44987721d370c7aaaaa77ff2d15d0b7b55a5affbbcef342b16b25a2ccbbb5133/diff:/var/lib/docker/overlay2/b7d0b911e7338a8e6ca6927117a12f85aab2cbc132d43e1700b61fb687a203f1/diff:/var/lib/docker/overlay2/83152bc017054fb2b29e989066f8a85e23caf777ca5248f466d5f33fa1f7243a/diff:/var/lib/docker/overlay2/8c27a7891b3daf3817a69eaa5d9a26b5b18fc866caf083e09d209bbbe21acf78/diff:/var/lib/docker/overlay2/2dc5d32389eac0fd6cdd465bf3398b2486d43c6e3921dbb0c2d0206076d4017e/diff:/var/lib/docker/overlay2/f740bfebea50dad454cc8ab0d57a096c5d04c41471760646532b1b9343462c0d/diff:/var/lib/docker/overlay2/204f1a145636ecfc05e51d2443cb2106ceb51abac8defa846ad0e4326791af0b/diff:/var/lib/docker/overlay2/9127890f44b9bd9f0ff712b73fc307248c7a0d3f16eac7bf1c596d71a72f20d1/diff:/var/lib/docker/overlay2/d620e0da87809bcd6e95a55c5089e7e56164fbfef1aa0b186d1207c24f9f8de8/diff:/var/lib/docker/overlay2/c949d9
b1b520ac1d58be5089c7888f3d3f00c0db0f0d63723233526ab7f03b58/diff:/var/lib/docker/overlay2/37bfdd772fcf752ccdd374e82a65a770f64c55153611ef90a704a60be426b7fd/diff:/var/lib/docker/overlay2/6a7c28f624c4e175dfbafdefeb4d588aa6be615f5f99d1c3f6ea21fc54229c15/diff:/var/lib/docker/overlay2/8077cbe7fccfa41016d701b24cf7ec5bfc3c8e96adfe505e43a13ba2dcef46e2/diff:/var/lib/docker/overlay2/49b2e6b041b991dcf582bec29e71df96ae4a07e84bda8d4f07455852803becca/diff:/var/lib/docker/overlay2/6887e85aec9fb21c920bf4e9bfafa2652513c83e1d9086dc408a22eb1432036a/diff:/var/lib/docker/overlay2/bd12f42d56efc7ba711e268e0a53f8f8b9414cc8d33946b0d009aad4fda6e2fe/diff:/var/lib/docker/overlay2/00fb3fd2c04bdca414b75a56009773c2945ada64d0d243a6f0b70cd65d94e02b/diff:/var/lib/docker/overlay2/282f941a9c2591d8ec9c8e695a7a3f009df7508517b1ff3646cd46584dcdf067/diff:/var/lib/docker/overlay2/3eda9b2ef5cef4fd1b087d6a1e0cd94dcbd980c12f40808d8d2d76b2ac89f02b/diff:/var/lib/docker/overlay2/52c75860fda37f3ca3728eb2a0a4faa12dce4116cd4d1b22176eda08cea48d43/diff:/var/lib/d
ocker/overlay2/ffff561332e4a8067f5b00d6686f958458574a566a8da12581d6ca0ae93c76f0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc31f5118e93c40ae687b61601897a4560c775b98951ab84948a718750c0fea6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc31f5118e93c40ae687b61601897a4560c775b98951ab84948a718750c0fea6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc31f5118e93c40ae687b61601897a4560c775b98951ab84948a718750c0fea6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-870219",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-870219/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-870219",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-870219",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-870219",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebf5cf00214ac878ecc37c8e16ca3f5b092cdcf055298e988c8073f4718085c0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/ebf5cf00214a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "cd37cc07364736a940f7a7fec7cdff19c74defbf2ada125330fd7cce8bb59a42",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-870219 -n missing-upgrade-870219
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-870219 -n missing-upgrade-870219: exit status 7 (157.366757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-870219" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "missing-upgrade-870219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-870219
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-870219: (3.922830923s)
--- FAIL: TestMissingContainerUpgrade (108.65s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (141.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.17.0.3753043744.exe start -p stopped-upgrade-266335 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0605 18:09:50.697687  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.17.0.3753043744.exe start -p stopped-upgrade-266335 --memory=2200 --vm-driver=docker  --container-runtime=crio: (2m3.939018776s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.17.0.3753043744.exe -p stopped-upgrade-266335 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.17.0.3753043744.exe -p stopped-upgrade-266335 stop: (2.014495452s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-266335 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0605 18:11:49.165465  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 18:11:50.282347  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-266335 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (15.93491978s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-266335] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-266335 in cluster stopped-upgrade-266335
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-266335" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0605 18:11:35.400735  523070 out.go:296] Setting OutFile to fd 1 ...
	I0605 18:11:35.400920  523070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:11:35.400930  523070 out.go:309] Setting ErrFile to fd 2...
	I0605 18:11:35.400935  523070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:11:35.401092  523070 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 18:11:35.401865  523070 out.go:303] Setting JSON to false
	I0605 18:11:35.402917  523070 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10428,"bootTime":1685978268,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 18:11:35.402991  523070 start.go:137] virtualization:  
	I0605 18:11:35.407204  523070 out.go:177] * [stopped-upgrade-266335] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 18:11:35.409306  523070 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 18:11:35.409418  523070 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0605 18:11:35.409452  523070 notify.go:220] Checking for updates...
	I0605 18:11:35.413221  523070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:11:35.415269  523070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 18:11:35.417180  523070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 18:11:35.419062  523070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 18:11:35.421129  523070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 18:11:35.423500  523070 config.go:182] Loaded profile config "stopped-upgrade-266335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0605 18:11:35.425864  523070 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0605 18:11:35.427470  523070 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 18:11:35.451611  523070 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 18:11:35.451712  523070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:11:35.532670  523070 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-06-05 18:11:35.522593727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:11:35.532793  523070 docker.go:294] overlay module found
	I0605 18:11:35.542115  523070 out.go:177] * Using the docker driver based on existing profile
	I0605 18:11:35.560743  523070 start.go:297] selected driver: docker
	I0605 18:11:35.560781  523070 start.go:875] validating driver "docker" against &{Name:stopped-upgrade-266335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-266335 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.96 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:11:35.560908  523070 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 18:11:35.561543  523070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:11:35.624325  523070 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-06-05 18:11:35.614303156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:11:35.624644  523070 cni.go:84] Creating CNI manager for ""
	I0605 18:11:35.624669  523070 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 18:11:35.624681  523070 start_flags.go:319] config:
	{Name:stopped-upgrade-266335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-266335 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.96 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:11:35.626932  523070 out.go:177] * Starting control plane node stopped-upgrade-266335 in cluster stopped-upgrade-266335
	I0605 18:11:35.629744  523070 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 18:11:35.631864  523070 out.go:177] * Pulling base image ...
	I0605 18:11:35.633640  523070 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0605 18:11:35.633854  523070 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0605 18:11:35.653480  523070 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0605 18:11:35.653652  523070 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0605 18:11:35.655468  523070 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0605 18:11:35.716017  523070 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0605 18:11:35.716183  523070 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/stopped-upgrade-266335/config.json ...
	I0605 18:11:35.720976  523070 cache.go:107] acquiring lock: {Name:mke7d9c39614b8aa3703697d7ecb327c1115ec14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:11:35.721642  523070 cache.go:115] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0605 18:11:35.721653  523070 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 692.168µs
	I0605 18:11:35.721663  523070 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0605 18:11:35.721672  523070 cache.go:107] acquiring lock: {Name:mk9b077cbb162a3def5f13efe6aec1090d859929 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:11:35.721780  523070 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0605 18:11:35.721947  523070 cache.go:107] acquiring lock: {Name:mkf89e75c0602e1c252968b56ff8cc4bd72441ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:11:35.722076  523070 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0605 18:11:35.722309  523070 cache.go:107] acquiring lock: {Name:mk6574677fe8d67704e4a0832798ac373de775db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:11:35.722412  523070 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0605 18:11:35.722585  523070 cache.go:107] acquiring lock: {Name:mk3d85b25fac1906d9bfbd666052fd791466bb90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:11:35.722715  523070 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0605 18:11:35.722908  523070 cache.go:107] acquiring lock: {Name:mk77a30a50f7f51e51b4de59ba184166d3e94bc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:11:35.723052  523070 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0605 18:11:35.723235  523070 cache.go:107] acquiring lock: {Name:mk714af935d78082ce046109683e24eddb0f6ddd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:11:35.723342  523070 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0605 18:11:35.724271  523070 cache.go:107] acquiring lock: {Name:mkecc658453dc6ee92263a587abd071789ce9754 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:11:35.724425  523070 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0605 18:11:35.724790  523070 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0605 18:11:35.725291  523070 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0605 18:11:35.725548  523070 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0605 18:11:35.725670  523070 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0605 18:11:35.725710  523070 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0605 18:11:35.725779  523070 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0605 18:11:35.725854  523070 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	W0605 18:11:36.180433  523070 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0605 18:11:36.180579  523070 cache.go:162] opening:  /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I0605 18:11:36.181015  523070 cache.go:162] opening:  /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W0605 18:11:36.187327  523070 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0605 18:11:36.187424  523070 cache.go:162] opening:  /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I0605 18:11:36.191692  523070 cache.go:162] opening:  /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0605 18:11:36.198349  523070 cache.go:162] opening:  /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	W0605 18:11:36.219127  523070 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0605 18:11:36.219205  523070 cache.go:162] opening:  /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I0605 18:11:36.232260  523070 cache.go:162] opening:  /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I0605 18:11:36.348475  523070 cache.go:157] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0605 18:11:36.348501  523070 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 625.59735ms
	I0605 18:11:36.348515  523070 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  385.34 KiB / 287.99 MiB [] 0.13% ? p/s ?
	I0605 18:11:36.776210  523070 cache.go:157] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0605 18:11:36.778730  523070 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 1.054456079s
	I0605 18:11:36.778759  523070 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  15.60 MiB / 287.99 MiB [>] 5.42% ? p/s ?
	I0605 18:11:36.848104  523070 cache.go:157] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0605 18:11:36.848126  523070 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.125822848s
	I0605 18:11:36.848142  523070 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.25 MiB
	I0605 18:11:37.091219  523070 cache.go:157] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0605 18:11:37.091296  523070 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.369622135s
	I0605 18:11:37.091326  523070 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0605 18:11:37.135533  523070 cache.go:157] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0605 18:11:37.135578  523070 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.413651416s
	I0605 18:11:37.135598  523070 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 40.46 MiB
	I0605 18:11:37.693117  523070 cache.go:157] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0605 18:11:37.693143  523070 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 1.97056152s
	I0605 18:11:37.693157  523070 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  107.79 MiB / 287.99 MiB  37.43% 41.46 MiB
	I0605 18:11:39.399619  523070 cache.go:157] /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0605 18:11:39.399643  523070 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 3.676412169s
	I0605 18:11:39.399656  523070 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0605 18:11:39.399667  523070 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 38.50 MiB
	I0605 18:11:43.873162  523070 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0605 18:11:43.873172  523070 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0605 18:11:44.883808  523070 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0605 18:11:44.883849  523070 cache.go:195] Successfully downloaded all kic artifacts
	I0605 18:11:44.884669  523070 start.go:364] acquiring machines lock for stopped-upgrade-266335: {Name:mkc4eedbe5afbeee2201f771310e2612bf8527eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:11:44.884776  523070 start.go:368] acquired machines lock for "stopped-upgrade-266335" in 67.684µs
	I0605 18:11:44.884804  523070 start.go:96] Skipping create...Using existing machine configuration
	I0605 18:11:44.886069  523070 fix.go:55] fixHost starting: 
	I0605 18:11:44.886402  523070 cli_runner.go:164] Run: docker container inspect stopped-upgrade-266335 --format={{.State.Status}}
	I0605 18:11:44.904528  523070 fix.go:103] recreateIfNeeded on stopped-upgrade-266335: state=Stopped err=<nil>
	W0605 18:11:44.904557  523070 fix.go:129] unexpected machine state, will restart: <nil>
	I0605 18:11:44.907773  523070 out.go:177] * Restarting existing docker container for "stopped-upgrade-266335" ...
	I0605 18:11:44.909432  523070 cli_runner.go:164] Run: docker start stopped-upgrade-266335
	I0605 18:11:45.413521  523070 cli_runner.go:164] Run: docker container inspect stopped-upgrade-266335 --format={{.State.Status}}
	I0605 18:11:45.438548  523070 kic.go:426] container "stopped-upgrade-266335" state is running.
	I0605 18:11:45.438972  523070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-266335
	I0605 18:11:45.466363  523070 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/stopped-upgrade-266335/config.json ...
	I0605 18:11:45.466606  523070 machine.go:88] provisioning docker machine ...
	I0605 18:11:45.466622  523070 ubuntu.go:169] provisioning hostname "stopped-upgrade-266335"
	I0605 18:11:45.466676  523070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-266335
	I0605 18:11:45.489175  523070 main.go:141] libmachine: Using SSH client type: native
	I0605 18:11:45.489629  523070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33304 <nil> <nil>}
	I0605 18:11:45.489646  523070 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-266335 && echo "stopped-upgrade-266335" | sudo tee /etc/hostname
	I0605 18:11:45.490306  523070 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0605 18:11:48.645953  523070 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-266335
	
	I0605 18:11:48.646032  523070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-266335
	I0605 18:11:48.666519  523070 main.go:141] libmachine: Using SSH client type: native
	I0605 18:11:48.666960  523070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33304 <nil> <nil>}
	I0605 18:11:48.666984  523070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-266335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-266335/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-266335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0605 18:11:48.813247  523070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0605 18:11:48.813270  523070 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16634-402421/.minikube CaCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16634-402421/.minikube}
	I0605 18:11:48.813288  523070 ubuntu.go:177] setting up certificates
	I0605 18:11:48.813297  523070 provision.go:83] configureAuth start
	I0605 18:11:48.813360  523070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-266335
	I0605 18:11:48.833523  523070 provision.go:138] copyHostCerts
	I0605 18:11:48.833600  523070 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem, removing ...
	I0605 18:11:48.833612  523070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem
	I0605 18:11:48.833691  523070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem (1082 bytes)
	I0605 18:11:48.833799  523070 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem, removing ...
	I0605 18:11:48.833809  523070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem
	I0605 18:11:48.833837  523070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem (1123 bytes)
	I0605 18:11:48.833893  523070 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem, removing ...
	I0605 18:11:48.833902  523070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem
	I0605 18:11:48.833927  523070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem (1675 bytes)
	I0605 18:11:48.833976  523070 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-266335 san=[192.168.59.96 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-266335]
	I0605 18:11:49.184978  523070 provision.go:172] copyRemoteCerts
	I0605 18:11:49.185048  523070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0605 18:11:49.185089  523070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-266335
	I0605 18:11:49.204168  523070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33304 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/stopped-upgrade-266335/id_rsa Username:docker}
	I0605 18:11:49.307208  523070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0605 18:11:49.333923  523070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0605 18:11:49.359048  523070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0605 18:11:49.384508  523070 provision.go:86] duration metric: configureAuth took 571.176828ms
	I0605 18:11:49.384537  523070 ubuntu.go:193] setting minikube options for container-runtime
	I0605 18:11:49.384729  523070 config.go:182] Loaded profile config "stopped-upgrade-266335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0605 18:11:49.384852  523070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-266335
	I0605 18:11:49.421767  523070 main.go:141] libmachine: Using SSH client type: native
	I0605 18:11:49.422206  523070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33304 <nil> <nil>}
	I0605 18:11:49.422228  523070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0605 18:11:49.905074  523070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0605 18:11:49.905097  523070 machine.go:91] provisioned docker machine in 4.438480505s
	I0605 18:11:49.905109  523070 start.go:300] post-start starting for "stopped-upgrade-266335" (driver="docker")
	I0605 18:11:49.905116  523070 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0605 18:11:49.905201  523070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0605 18:11:49.905266  523070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-266335
	I0605 18:11:49.938170  523070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33304 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/stopped-upgrade-266335/id_rsa Username:docker}
	I0605 18:11:50.042583  523070 ssh_runner.go:195] Run: cat /etc/os-release
	I0605 18:11:50.047761  523070 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0605 18:11:50.047787  523070 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0605 18:11:50.047800  523070 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0605 18:11:50.047806  523070 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0605 18:11:50.047816  523070 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/addons for local assets ...
	I0605 18:11:50.047881  523070 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/files for local assets ...
	I0605 18:11:50.048025  523070 filesync.go:149] local asset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> 4078132.pem in /etc/ssl/certs
	I0605 18:11:50.048146  523070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0605 18:11:50.058072  523070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 18:11:50.091006  523070 start.go:303] post-start completed in 185.880108ms
	I0605 18:11:50.091112  523070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 18:11:50.091170  523070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-266335
	I0605 18:11:50.115846  523070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33304 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/stopped-upgrade-266335/id_rsa Username:docker}
	I0605 18:11:50.231144  523070 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0605 18:11:50.238585  523070 fix.go:57] fixHost completed within 5.353760508s
	I0605 18:11:50.238648  523070 start.go:83] releasing machines lock for "stopped-upgrade-266335", held for 5.353857944s
	I0605 18:11:50.238760  523070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-266335
	I0605 18:11:50.266655  523070 ssh_runner.go:195] Run: cat /version.json
	I0605 18:11:50.266706  523070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-266335
	I0605 18:11:50.266929  523070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0605 18:11:50.267004  523070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-266335
	I0605 18:11:50.313571  523070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33304 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/stopped-upgrade-266335/id_rsa Username:docker}
	I0605 18:11:50.327307  523070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33304 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/stopped-upgrade-266335/id_rsa Username:docker}
	W0605 18:11:50.421921  523070 start.go:414] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0605 18:11:50.422028  523070 ssh_runner.go:195] Run: systemctl --version
	I0605 18:11:50.552617  523070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0605 18:11:50.671030  523070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0605 18:11:50.677631  523070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 18:11:50.714466  523070 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0605 18:11:50.714544  523070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 18:11:50.753049  523070 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0605 18:11:50.753069  523070 start.go:481] detecting cgroup driver to use...
	I0605 18:11:50.753102  523070 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0605 18:11:50.753160  523070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0605 18:11:50.786171  523070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0605 18:11:50.798819  523070 docker.go:193] disabling cri-docker service (if available) ...
	I0605 18:11:50.798880  523070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0605 18:11:50.811212  523070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0605 18:11:50.823726  523070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0605 18:11:50.837421  523070 docker.go:203] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0605 18:11:50.837492  523070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0605 18:11:50.956142  523070 docker.go:209] disabling docker service ...
	I0605 18:11:50.956227  523070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0605 18:11:50.970280  523070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0605 18:11:50.983190  523070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0605 18:11:51.095691  523070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0605 18:11:51.217109  523070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0605 18:11:51.232151  523070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 18:11:51.251686  523070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0605 18:11:51.251811  523070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:11:51.266386  523070 out.go:177] 
	W0605 18:11:51.268538  523070 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0605 18:11:51.268567  523070 out.go:239] * 
	* 
	W0605 18:11:51.272045  523070 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0605 18:11:51.274844  523070 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-266335 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (141.90s)
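Failure analysis: the exit status 90 above traces to one step. Minikube HEAD points cri-o at registry.k8s.io/pause:3.2 by rewriting /etc/crio/crio.conf.d/02-crio.conf, but the restarted kicbase v0.0.17 container (created by minikube v1.17.0) predates that drop-in file, so the sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal workaround sketch, assuming simply creating the drop-in before the rewrite would be acceptable (hypothetical; this is not minikube's actual fix):

	# Hypothetical guard: ensure the cri-o drop-in exists inside the old
	# container before rewriting pause_image, so the failing sed has a target.
	sudo mkdir -p /etc/crio/crio.conf.d
	[ -f /etc/crio/crio.conf.d/02-crio.conf ] || \
	  printf '[crio.image]\npause_image = ""\n' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf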

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (52.1s)
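This test starts the pause-845789 profile a second time and asserts that minikube recognizes the already-running, already-configured cluster. In the log below the expected marker never appears because the second start re-provisions the node ("Updating the running docker \"pause-845789\" container ..." and "Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ..."). A standalone sketch of the same check, using the binary and profile from the log (hypothetical script, not the test's actual code):

	# Re-run the start and grep stdout for the no-reconfiguration marker,
	# mirroring the assertion at pause_test.go:100.
	out="$(out/minikube-linux-arm64 start -p pause-845789 --alsologtostderr -v=1 --driver=docker --container-runtime=crio 2>/dev/null)"
	echo "$out" | grep -q "The running cluster does not require reconfiguration" \
	  || echo "FAIL: second start reconfigured the cluster"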

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-845789 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-845789 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.118886357s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-845789] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-845789 in cluster pause-845789
	* Pulling base image ...
	* Updating the running docker "pause-845789" container ...
	* Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-845789" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0605 18:13:56.327023  534165 out.go:296] Setting OutFile to fd 1 ...
	I0605 18:13:56.327590  534165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:13:56.327627  534165 out.go:309] Setting ErrFile to fd 2...
	I0605 18:13:56.327646  534165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:13:56.327910  534165 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 18:13:56.328423  534165 out.go:303] Setting JSON to false
	I0605 18:13:56.329593  534165 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10569,"bootTime":1685978268,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 18:13:56.329708  534165 start.go:137] virtualization:  
	I0605 18:13:56.333205  534165 out.go:177] * [pause-845789] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 18:13:56.334899  534165 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 18:13:56.334980  534165 notify.go:220] Checking for updates...
	I0605 18:13:56.338065  534165 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:13:56.340268  534165 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 18:13:56.342451  534165 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 18:13:56.345648  534165 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 18:13:56.347654  534165 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 18:13:56.350169  534165 config.go:182] Loaded profile config "pause-845789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 18:13:56.350778  534165 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 18:13:56.376470  534165 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 18:13:56.376587  534165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:13:56.463860  534165 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-06-05 18:13:56.446765441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:13:56.464079  534165 docker.go:294] overlay module found
	I0605 18:13:56.468586  534165 out.go:177] * Using the docker driver based on existing profile
	I0605 18:13:56.470790  534165 start.go:297] selected driver: docker
	I0605 18:13:56.470819  534165 start.go:875] validating driver "docker" against &{Name:pause-845789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-845789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:13:56.470972  534165 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 18:13:56.471083  534165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:13:56.566494  534165 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-06-05 18:13:56.555985656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:13:56.566989  534165 cni.go:84] Creating CNI manager for ""
	I0605 18:13:56.567006  534165 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 18:13:56.567016  534165 start_flags.go:319] config:
	{Name:pause-845789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-845789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:13:56.570292  534165 out.go:177] * Starting control plane node pause-845789 in cluster pause-845789
	I0605 18:13:56.572917  534165 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 18:13:56.575573  534165 out.go:177] * Pulling base image ...
	I0605 18:13:56.593144  534165 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 18:13:56.593219  534165 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0605 18:13:56.593238  534165 cache.go:57] Caching tarball of preloaded images
	I0605 18:13:56.593146  534165 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon
	I0605 18:13:56.593321  534165 preload.go:174] Found /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0605 18:13:56.593330  534165 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0605 18:13:56.593514  534165 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/config.json ...
	I0605 18:13:56.612268  534165 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon, skipping pull
	I0605 18:13:56.612293  534165 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f exists in daemon, skipping load
	I0605 18:13:56.612312  534165 cache.go:195] Successfully downloaded all kic artifacts
	I0605 18:13:56.612351  534165 start.go:364] acquiring machines lock for pause-845789: {Name:mkf6594e26c6c435e062262a00f3d6751fc0f3d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:13:56.612442  534165 start.go:368] acquired machines lock for "pause-845789" in 57.961µs
	I0605 18:13:56.612468  534165 start.go:96] Skipping create...Using existing machine configuration
	I0605 18:13:56.612478  534165 fix.go:55] fixHost starting: 
	I0605 18:13:56.612752  534165 cli_runner.go:164] Run: docker container inspect pause-845789 --format={{.State.Status}}
	I0605 18:13:56.630894  534165 fix.go:103] recreateIfNeeded on pause-845789: state=Running err=<nil>
	W0605 18:13:56.630923  534165 fix.go:129] unexpected machine state, will restart: <nil>
	I0605 18:13:56.633491  534165 out.go:177] * Updating the running docker "pause-845789" container ...
	I0605 18:13:56.635673  534165 machine.go:88] provisioning docker machine ...
	I0605 18:13:56.635706  534165 ubuntu.go:169] provisioning hostname "pause-845789"
	I0605 18:13:56.635827  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:13:56.660656  534165 main.go:141] libmachine: Using SSH client type: native
	I0605 18:13:56.661204  534165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33313 <nil> <nil>}
	I0605 18:13:56.661223  534165 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-845789 && echo "pause-845789" | sudo tee /etc/hostname
	I0605 18:13:56.827776  534165 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-845789
	
	I0605 18:13:56.827866  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:13:56.851641  534165 main.go:141] libmachine: Using SSH client type: native
	I0605 18:13:56.852246  534165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33313 <nil> <nil>}
	I0605 18:13:56.852277  534165 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-845789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-845789/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-845789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0605 18:13:56.994052  534165 main.go:141] libmachine: SSH cmd err, output: <nil>: 
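For reference, the /etc/hosts rewrite above only ever touches the 127.0.1.1 line, leaving the rest of the file alone. A minimal spot-check from inside the container (illustrative, not part of the test run):

    grep '^127.0.1.1' /etc/hosts    # should now read: 127.0.1.1 pause-845789
    getent hosts pause-845789       # confirms the hostname resolves locally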
	I0605 18:13:56.994078  534165 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16634-402421/.minikube CaCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16634-402421/.minikube}
	I0605 18:13:56.994114  534165 ubuntu.go:177] setting up certificates
	I0605 18:13:56.994125  534165 provision.go:83] configureAuth start
	I0605 18:13:56.994193  534165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-845789
	I0605 18:13:57.018499  534165 provision.go:138] copyHostCerts
	I0605 18:13:57.018595  534165 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem, removing ...
	I0605 18:13:57.018604  534165 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem
	I0605 18:13:57.018684  534165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem (1082 bytes)
	I0605 18:13:57.018782  534165 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem, removing ...
	I0605 18:13:57.018787  534165 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem
	I0605 18:13:57.018815  534165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem (1123 bytes)
	I0605 18:13:57.018867  534165 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem, removing ...
	I0605 18:13:57.018871  534165 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem
	I0605 18:13:57.018895  534165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem (1675 bytes)
	I0605 18:13:57.018937  534165 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem org=jenkins.pause-845789 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-845789]
	I0605 18:13:57.805381  534165 provision.go:172] copyRemoteCerts
	I0605 18:13:57.805471  534165 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0605 18:13:57.805528  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:13:57.830171  534165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/pause-845789/id_rsa Username:docker}
	I0605 18:13:57.932249  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0605 18:13:57.962633  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0605 18:13:57.994556  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0605 18:13:58.027378  534165 provision.go:86] duration metric: configureAuth took 1.033206632s
	I0605 18:13:58.027410  534165 ubuntu.go:193] setting minikube options for container-runtime
	I0605 18:13:58.027674  534165 config.go:182] Loaded profile config "pause-845789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 18:13:58.027840  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:13:58.047198  534165 main.go:141] libmachine: Using SSH client type: native
	I0605 18:13:58.047636  534165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33313 <nil> <nil>}
	I0605 18:13:58.047657  534165 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0605 18:14:03.542393  534165 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0605 18:14:03.542418  534165 machine.go:91] provisioned docker machine in 6.906724296s
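Note that provisioning sets CRIO_MINIKUBE_OPTIONS in /etc/sysconfig/crio.minikube rather than editing crio.conf directly; this only takes effect if the kicbase image's crio unit sources that file, which is an assumption about the image and not visible in this log. An illustrative way to confirm the wiring:

    cat /etc/sysconfig/crio.minikube           # the file written above
    systemctl cat crio | grep -i environment   # shows whether the unit loads it (assumes an EnvironmentFile= drop-in)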
	I0605 18:14:03.542429  534165 start.go:300] post-start starting for "pause-845789" (driver="docker")
	I0605 18:14:03.542436  534165 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0605 18:14:03.542502  534165 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0605 18:14:03.542549  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:14:03.562739  534165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/pause-845789/id_rsa Username:docker}
	I0605 18:14:03.671703  534165 ssh_runner.go:195] Run: cat /etc/os-release
	I0605 18:14:03.676225  534165 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0605 18:14:03.676268  534165 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0605 18:14:03.676285  534165 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0605 18:14:03.676292  534165 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0605 18:14:03.676308  534165 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/addons for local assets ...
	I0605 18:14:03.676373  534165 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/files for local assets ...
	I0605 18:14:03.676462  534165 filesync.go:149] local asset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> 4078132.pem in /etc/ssl/certs
	I0605 18:14:03.676574  534165 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0605 18:14:03.688093  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 18:14:03.718267  534165 start.go:303] post-start completed in 175.823334ms
	I0605 18:14:03.718350  534165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 18:14:03.718393  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:14:03.737392  534165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/pause-845789/id_rsa Username:docker}
	I0605 18:14:03.834822  534165 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0605 18:14:03.841415  534165 fix.go:57] fixHost completed within 7.228929022s
	I0605 18:14:03.841441  534165 start.go:83] releasing machines lock for "pause-845789", held for 7.228985547s
	I0605 18:14:03.841525  534165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-845789
	I0605 18:14:03.860268  534165 ssh_runner.go:195] Run: cat /version.json
	I0605 18:14:03.860279  534165 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0605 18:14:03.860327  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:14:03.860329  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:14:03.889402  534165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/pause-845789/id_rsa Username:docker}
	I0605 18:14:03.904059  534165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/pause-845789/id_rsa Username:docker}
	I0605 18:14:04.129785  534165 ssh_runner.go:195] Run: systemctl --version
	I0605 18:14:04.135993  534165 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0605 18:14:04.287895  534165 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0605 18:14:04.296503  534165 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 18:14:04.311964  534165 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0605 18:14:04.312136  534165 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 18:14:04.331537  534165 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
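The two find commands above neutralize competing loopback and bridge CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so they can be restored later. To see what was sidelined (illustrative):

    ls /etc/cni/net.d/*.mk_disabled 2>/dev/null   # lists any configs the run disabled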
	I0605 18:14:04.331625  534165 start.go:481] detecting cgroup driver to use...
	I0605 18:14:04.331685  534165 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0605 18:14:04.331770  534165 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0605 18:14:04.354092  534165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0605 18:14:04.374054  534165 docker.go:193] disabling cri-docker service (if available) ...
	I0605 18:14:04.374213  534165 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0605 18:14:04.416882  534165 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0605 18:14:04.505815  534165 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0605 18:14:04.893109  534165 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0605 18:14:05.263372  534165 docker.go:209] disabling docker service ...
	I0605 18:14:05.263494  534165 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0605 18:14:05.302641  534165 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0605 18:14:05.332231  534165 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0605 18:14:05.553230  534165 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0605 18:14:05.816090  534165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0605 18:14:05.868579  534165 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 18:14:05.946631  534165 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0605 18:14:05.946742  534165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:14:05.983326  534165 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0605 18:14:05.983432  534165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:14:06.021664  534165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:14:06.040303  534165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
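The three sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the cgroup manager, and conmon's cgroup. A hypothetical spot-check of the end state:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"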
	I0605 18:14:06.089592  534165 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0605 18:14:06.131352  534165 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0605 18:14:06.167713  534165 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0605 18:14:06.206901  534165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0605 18:14:06.514724  534165 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0605 18:14:16.139807  534165 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.625046719s)
	I0605 18:14:16.139835  534165 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0605 18:14:16.139890  534165 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0605 18:14:16.146463  534165 start.go:549] Will wait 60s for crictl version
	I0605 18:14:16.146535  534165 ssh_runner.go:195] Run: which crictl
	I0605 18:14:16.152202  534165 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0605 18:14:16.215777  534165 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
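These crictl invocations work without a --runtime-endpoint flag because of the /etc/crictl.yaml written at 18:14:05 above, which pins the default endpoint to the CRI-O socket. Equivalent explicit usage (illustrative):

    sudo crictl info                                                      # reads /etc/crictl.yaml for the endpoint
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version # same query, endpoint spelled out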
	I0605 18:14:16.215865  534165 ssh_runner.go:195] Run: crio --version
	I0605 18:14:16.265299  534165 ssh_runner.go:195] Run: crio --version
	I0605 18:14:16.314486  534165 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0605 18:14:16.317520  534165 cli_runner.go:164] Run: docker network inspect pause-845789 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 18:14:16.335674  534165 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0605 18:14:16.340802  534165 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 18:14:16.340879  534165 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 18:14:16.390311  534165 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 18:14:16.390330  534165 crio.go:415] Images already preloaded, skipping extraction
	I0605 18:14:16.390388  534165 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 18:14:16.434732  534165 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 18:14:16.434755  534165 cache_images.go:84] Images are preloaded, skipping loading
	I0605 18:14:16.434834  534165 ssh_runner.go:195] Run: crio config
	I0605 18:14:16.493681  534165 cni.go:84] Creating CNI manager for ""
	I0605 18:14:16.493716  534165 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 18:14:16.493728  534165 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0605 18:14:16.493764  534165 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-845789 NodeName:pause-845789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0605 18:14:16.493977  534165 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-845789"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
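A generated config like the one above can be sanity-checked before kubeadm acts on it; the file lands at /var/tmp/minikube/kubeadm.yaml.new a few lines below. Illustrative checks, not part of the run (kubeadm config validate exists on v1.26+; --dry-run also works on older releases):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run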
	I0605 18:14:16.494086  534165 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-845789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:pause-845789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
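The ExecStart override above is delivered as a systemd drop-in (copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below), so the effective unit is the stock kubelet.service plus this fragment. To inspect the merged result on a node (illustrative):

    systemctl cat kubelet   # prints the unit followed by each drop-in, including 10-kubeadm.conf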
	I0605 18:14:16.494189  534165 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0605 18:14:16.505882  534165 binaries.go:44] Found k8s binaries, skipping transfer
	I0605 18:14:16.505958  534165 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0605 18:14:16.516915  534165 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0605 18:14:16.540373  534165 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0605 18:14:16.563154  534165 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0605 18:14:16.586542  534165 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0605 18:14:16.591444  534165 certs.go:56] Setting up /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789 for IP: 192.168.76.2
	I0605 18:14:16.591479  534165 certs.go:190] acquiring lock for shared ca certs: {Name:mkcde6289d01a116d789395fcd8dd485889e790f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:14:16.591615  534165 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key
	I0605 18:14:16.591665  534165 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key
	I0605 18:14:16.591751  534165 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/client.key
	I0605 18:14:16.591822  534165 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/apiserver.key.31bdca25
	I0605 18:14:16.591868  534165 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/proxy-client.key
	I0605 18:14:16.592010  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem (1338 bytes)
	W0605 18:14:16.592044  534165 certs.go:433] ignoring /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813_empty.pem, impossibly tiny 0 bytes
	I0605 18:14:16.592060  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem (1679 bytes)
	I0605 18:14:16.592088  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem (1082 bytes)
	I0605 18:14:16.592116  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem (1123 bytes)
	I0605 18:14:16.592147  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem (1675 bytes)
	I0605 18:14:16.592196  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 18:14:16.592832  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0605 18:14:16.623762  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0605 18:14:16.661920  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0605 18:14:16.697772  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0605 18:14:16.731408  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0605 18:14:16.765072  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0605 18:14:16.797709  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0605 18:14:16.833084  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0605 18:14:16.867571  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /usr/share/ca-certificates/4078132.pem (1708 bytes)
	I0605 18:14:16.903190  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0605 18:14:16.935230  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem --> /usr/share/ca-certificates/407813.pem (1338 bytes)
	I0605 18:14:16.966804  534165 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0605 18:14:16.991740  534165 ssh_runner.go:195] Run: openssl version
	I0605 18:14:17.000624  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4078132.pem && ln -fs /usr/share/ca-certificates/4078132.pem /etc/ssl/certs/4078132.pem"
	I0605 18:14:17.015358  534165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4078132.pem
	I0605 18:14:17.020925  534165 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun  5 17:39 /usr/share/ca-certificates/4078132.pem
	I0605 18:14:17.021018  534165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4078132.pem
	I0605 18:14:17.032511  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4078132.pem /etc/ssl/certs/3ec20f2e.0"
	I0605 18:14:17.044089  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0605 18:14:17.057870  534165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:14:17.063568  534165 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun  5 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:14:17.063643  534165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:14:17.073249  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0605 18:14:17.085138  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407813.pem && ln -fs /usr/share/ca-certificates/407813.pem /etc/ssl/certs/407813.pem"
	I0605 18:14:17.097990  534165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407813.pem
	I0605 18:14:17.103818  534165 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun  5 17:39 /usr/share/ca-certificates/407813.pem
	I0605 18:14:17.103912  534165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407813.pem
	I0605 18:14:17.113730  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407813.pem /etc/ssl/certs/51391683.0"
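The .0 symlink names above are OpenSSL subject hashes: consumers of /etc/ssl/certs look certificates up by hash rather than by filename. Each hash is reproducible from the PEM (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/407813.pem
    # prints 51391683, matching the /etc/ssl/certs/51391683.0 link created above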
	I0605 18:14:17.125505  534165 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0605 18:14:17.130934  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0605 18:14:17.140970  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0605 18:14:17.150717  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0605 18:14:17.160039  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0605 18:14:17.170167  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0605 18:14:17.179450  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
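Each -checkend 86400 above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; it exits 0 if so and non-zero otherwise, which is presumably what the caller checks. Standalone usage (illustrative):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expires within 24h'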
	I0605 18:14:17.189020  534165 kubeadm.go:404] StartCluster: {Name:pause-845789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-845789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:14:17.189203  534165 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0605 18:14:17.189272  534165 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0605 18:14:17.253500  534165 cri.go:88] found id: "f5667873af88719df6b5dedf9f489eb022c51271f2cd1b3eb469e0d4dacc97a7"
	I0605 18:14:17.253524  534165 cri.go:88] found id: "c9b4f7001f3c15228d03cea251b3deb8ed506d11399610450285f98d983fda93"
	I0605 18:14:17.253537  534165 cri.go:88] found id: "4fa0d894ad1629fc01f6d8b191a6350242e7cac66cac05705b065bc6f0d07664"
	I0605 18:14:17.253542  534165 cri.go:88] found id: "71d3d8c961e5398097e4f6db56499fa661e52ecb42ce4529773a11baf8e4738c"
	I0605 18:14:17.253546  534165 cri.go:88] found id: "f4898fe3b98d8da5dd96757f14bd45d4596b39d01316f166c39623a20ca9c09e"
	I0605 18:14:17.253551  534165 cri.go:88] found id: "9d612546a5b5f10d19269ed6d4beb3cf134ca893d0c7b8c4eac3c5d0535e98fb"
	I0605 18:14:17.253555  534165 cri.go:88] found id: "0697dd4705d853e12210ccf5db442dc6f56f033e9df790b5229be8ea9f99475e"
	I0605 18:14:17.253559  534165 cri.go:88] found id: "4d5bd8c85ccb6172975ab10caa1da5a458e7ccf843a84a1de3429849770fbbe8"
	I0605 18:14:17.253563  534165 cri.go:88] found id: "85f49c943fc9ff6451d80220b5caf2e551d9aaf68d9e1a871c1d949465da7dde"
	I0605 18:14:17.253576  534165 cri.go:88] found id: "b3276bd40b155a7e405949bcc44f28d8f3463a34813f03a65d81e6b2ed7c635c"
	I0605 18:14:17.253585  534165 cri.go:88] found id: "2d74c55ee8d9ad65c7c364cb88ed89db229e28095c42b40122f8af0df70cd7e5"
	I0605 18:14:17.253590  534165 cri.go:88] found id: "8914d3d8c576306c35e741da6ce526747cf90175f3bb570e9f30860e67221a94"
	I0605 18:14:17.253594  534165 cri.go:88] found id: "6026c38e08eb3d1b7d8d26bdc1ec195ad0ac869068f3d92df6b7f454615c3a5c"
	I0605 18:14:17.253611  534165 cri.go:88] found id: "099ec7a9893d2275206ff3bf58acd010c8000628d21e04f52a37e4e26135ee5b"
	I0605 18:14:17.253617  534165 cri.go:88] found id: "c4e6052c8ddbebdc10f9279a4c704e7b5c504411d2b484170b5233330893d115"
	I0605 18:14:17.253625  534165 cri.go:88] found id: "742da03ee25f724e2ebc7481c76123716fea92ba9ea1339cfa37e54549863f52"
	I0605 18:14:17.253629  534165 cri.go:88] found id: ""
	I0605 18:14:17.253690  534165 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-845789
helpers_test.go:235: (dbg) docker inspect pause-845789:

-- stdout --
	[
	    {
	        "Id": "bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12",
	        "Created": "2023-06-05T18:13:13.694452925Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 531145,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-05T18:13:14.043858497Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:80ea0da8caa6eb7997e8d55fe8736424844c5160aabf0e85547dc140c538e81f",
	        "ResolvConfPath": "/var/lib/docker/containers/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/hostname",
	        "HostsPath": "/var/lib/docker/containers/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/hosts",
	        "LogPath": "/var/lib/docker/containers/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12-json.log",
	        "Name": "/pause-845789",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-845789:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-845789",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c2c8103ebcfedecd65cbcbdea301d8f40d4c84e6bd32ff57423aa2edf8936057-init/diff:/var/lib/docker/overlay2/12deadd96699cc2736cf6d24a9900cb6d72f9bc5f3f15d793b28adb475def155/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c2c8103ebcfedecd65cbcbdea301d8f40d4c84e6bd32ff57423aa2edf8936057/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c2c8103ebcfedecd65cbcbdea301d8f40d4c84e6bd32ff57423aa2edf8936057/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c2c8103ebcfedecd65cbcbdea301d8f40d4c84e6bd32ff57423aa2edf8936057/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-845789",
	                "Source": "/var/lib/docker/volumes/pause-845789/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-845789",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-845789",
	                "name.minikube.sigs.k8s.io": "pause-845789",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5843eff4316420c067d3eb705229d20da8b236fb69655e8348e30e7d12d66c1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33313"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33312"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33309"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33311"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33310"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d5843eff4316",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-845789": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bebf24cd58e9",
	                        "pause-845789"
	                    ],
	                    "NetworkID": "f3557e92d1538057696618098145c870a6c1c323dddae88497e1195c6dfdcb37",
	                    "EndpointID": "5e6b6d2666fca9bbceabc4ae6a01929f2260c550cfb85f20e81682f43628b2f4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
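The Ports map in this inspect output is where the `(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort` template used throughout the log reads from. The same lookup can be reproduced directly (illustrative):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-845789
    # prints 33313 for this run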
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-845789 -n pause-845789
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-845789 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-845789 logs -n 25: (2.851922543s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-639332       | scheduled-stop-639332       | jenkins | v1.30.1 | 05 Jun 23 18:06 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-639332       | scheduled-stop-639332       | jenkins | v1.30.1 | 05 Jun 23 18:06 UTC | 05 Jun 23 18:06 UTC |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| delete  | -p scheduled-stop-639332       | scheduled-stop-639332       | jenkins | v1.30.1 | 05 Jun 23 18:07 UTC | 05 Jun 23 18:07 UTC |
	| start   | -p insufficient-storage-035590 | insufficient-storage-035590 | jenkins | v1.30.1 | 05 Jun 23 18:07 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-035590 | insufficient-storage-035590 | jenkins | v1.30.1 | 05 Jun 23 18:07 UTC | 05 Jun 23 18:07 UTC |
	| start   | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:07 UTC |                     |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:07 UTC | 05 Jun 23 18:08 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	| start   | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-063572 sudo    | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| stop    | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	| start   | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-063572 sudo    | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	| start   | -p kubernetes-upgrade-987814   | kubernetes-upgrade-987814   | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:09 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p missing-upgrade-870219      | missing-upgrade-870219      | jenkins | v1.30.1 | 05 Jun 23 18:09 UTC | 05 Jun 23 18:09 UTC |
	| stop    | -p kubernetes-upgrade-987814   | kubernetes-upgrade-987814   | jenkins | v1.30.1 | 05 Jun 23 18:09 UTC | 05 Jun 23 18:09 UTC |
	| start   | -p kubernetes-upgrade-987814   | kubernetes-upgrade-987814   | jenkins | v1.30.1 | 05 Jun 23 18:09 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p stopped-upgrade-266335      | stopped-upgrade-266335      | jenkins | v1.30.1 | 05 Jun 23 18:11 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p stopped-upgrade-266335      | stopped-upgrade-266335      | jenkins | v1.30.1 | 05 Jun 23 18:11 UTC | 05 Jun 23 18:11 UTC |
	| start   | -p running-upgrade-783662      | running-upgrade-783662      | jenkins | v1.30.1 | 05 Jun 23 18:12 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p running-upgrade-783662      | running-upgrade-783662      | jenkins | v1.30.1 | 05 Jun 23 18:13 UTC | 05 Jun 23 18:13 UTC |
	| start   | -p pause-845789 --memory=2048  | pause-845789                | jenkins | v1.30.1 | 05 Jun 23 18:13 UTC | 05 Jun 23 18:13 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-845789                | pause-845789                | jenkins | v1.30.1 | 05 Jun 23 18:13 UTC | 05 Jun 23 18:14 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
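The rows above are minikube CLI invocations recorded per profile. For reference, the final two entries (the ones that produced the trace below) correspond to commands of roughly this shape, reconstructed from the logged flags rather than copied from a shell history:

	out/minikube-linux-arm64 start -p pause-845789 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p pause-845789 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
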
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/05 18:13:56
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
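A worked reading of that format: in `I0605 18:13:56.327023  534165 out.go:296]`, the leading `I` is the severity (I/W/E/F = info, warning, error, fatal), `0605` is the month and day (June 5), `18:13:56.327023` is the timestamp, `534165` is the thread (process) id, and `out.go:296` is the source file and line that emitted the message.
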
	I0605 18:13:56.327023  534165 out.go:296] Setting OutFile to fd 1 ...
	I0605 18:13:56.327590  534165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:13:56.327627  534165 out.go:309] Setting ErrFile to fd 2...
	I0605 18:13:56.327646  534165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:13:56.327910  534165 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 18:13:56.328423  534165 out.go:303] Setting JSON to false
	I0605 18:13:56.329593  534165 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10569,"bootTime":1685978268,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 18:13:56.329708  534165 start.go:137] virtualization:  
	I0605 18:13:56.333205  534165 out.go:177] * [pause-845789] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 18:13:56.334899  534165 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 18:13:56.334980  534165 notify.go:220] Checking for updates...
	I0605 18:13:56.338065  534165 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:13:56.340268  534165 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 18:13:56.342451  534165 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 18:13:56.345648  534165 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 18:13:56.347654  534165 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 18:13:56.350169  534165 config.go:182] Loaded profile config "pause-845789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 18:13:56.350778  534165 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 18:13:56.376470  534165 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 18:13:56.376587  534165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:13:56.463860  534165 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-06-05 18:13:56.446765441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:13:56.464079  534165 docker.go:294] overlay module found
	I0605 18:13:56.468586  534165 out.go:177] * Using the docker driver based on existing profile
	I0605 18:13:56.470790  534165 start.go:297] selected driver: docker
	I0605 18:13:56.470819  534165 start.go:875] validating driver "docker" against &{Name:pause-845789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-845789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:13:56.470972  534165 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 18:13:56.471083  534165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:13:56.566494  534165 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-06-05 18:13:56.555985656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:13:56.566989  534165 cni.go:84] Creating CNI manager for ""
	I0605 18:13:56.567006  534165 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 18:13:56.567016  534165 start_flags.go:319] config:
	{Name:pause-845789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-845789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
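This resolved config is what gets persisted to the profile's config.json (the "Saving config" step below). One way to pull individual fields back out of the saved file, sketched here assuming jq is available on the Jenkins host and that the JSON field names mirror the struct fields above:

	# Hypothetical inspection of the saved profile config (path taken from the log below).
	jq '.KubernetesConfig.KubernetesVersion, .Nodes[0].IP' \
	  /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/config.json
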
	I0605 18:13:56.570292  534165 out.go:177] * Starting control plane node pause-845789 in cluster pause-845789
	I0605 18:13:56.572917  534165 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 18:13:56.575573  534165 out.go:177] * Pulling base image ...
	I0605 18:13:56.593144  534165 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 18:13:56.593219  534165 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0605 18:13:56.593238  534165 cache.go:57] Caching tarball of preloaded images
	I0605 18:13:56.593146  534165 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon
	I0605 18:13:56.593321  534165 preload.go:174] Found /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0605 18:13:56.593330  534165 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0605 18:13:56.593514  534165 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/config.json ...
	I0605 18:13:56.612268  534165 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon, skipping pull
	I0605 18:13:56.612293  534165 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f exists in daemon, skipping load
	I0605 18:13:56.612312  534165 cache.go:195] Successfully downloaded all kic artifacts
	I0605 18:13:56.612351  534165 start.go:364] acquiring machines lock for pause-845789: {Name:mkf6594e26c6c435e062262a00f3d6751fc0f3d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:13:56.612442  534165 start.go:368] acquired machines lock for "pause-845789" in 57.961µs
	I0605 18:13:56.612468  534165 start.go:96] Skipping create...Using existing machine configuration
	I0605 18:13:56.612478  534165 fix.go:55] fixHost starting: 
	I0605 18:13:56.612752  534165 cli_runner.go:164] Run: docker container inspect pause-845789 --format={{.State.Status}}
	I0605 18:13:56.630894  534165 fix.go:103] recreateIfNeeded on pause-845789: state=Running err=<nil>
	W0605 18:13:56.630923  534165 fix.go:129] unexpected machine state, will restart: <nil>
	I0605 18:13:56.633491  534165 out.go:177] * Updating the running docker "pause-845789" container ...
	I0605 18:13:54.622125  517520 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:13:54.622575  517520 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0605 18:13:54.622637  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 18:13:54.622735  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 18:13:54.668550  517520 cri.go:88] found id: "7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:13:54.668571  517520 cri.go:88] found id: ""
	I0605 18:13:54.668578  517520 logs.go:284] 1 containers: [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc]
	I0605 18:13:54.668634  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:13:54.673517  517520 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 18:13:54.673584  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 18:13:54.719663  517520 cri.go:88] found id: ""
	I0605 18:13:54.719687  517520 logs.go:284] 0 containers: []
	W0605 18:13:54.719695  517520 logs.go:286] No container was found matching "etcd"
	I0605 18:13:54.719702  517520 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 18:13:54.719789  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 18:13:54.765404  517520 cri.go:88] found id: ""
	I0605 18:13:54.765425  517520 logs.go:284] 0 containers: []
	W0605 18:13:54.765432  517520 logs.go:286] No container was found matching "coredns"
	I0605 18:13:54.765439  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 18:13:54.765500  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 18:13:54.826008  517520 cri.go:88] found id: "f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:13:54.826030  517520 cri.go:88] found id: ""
	I0605 18:13:54.826045  517520 logs.go:284] 1 containers: [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe]
	I0605 18:13:54.826121  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:13:54.831385  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 18:13:54.831570  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 18:13:54.876919  517520 cri.go:88] found id: ""
	I0605 18:13:54.876941  517520 logs.go:284] 0 containers: []
	W0605 18:13:54.876949  517520 logs.go:286] No container was found matching "kube-proxy"
	I0605 18:13:54.876955  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 18:13:54.877018  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 18:13:54.922739  517520 cri.go:88] found id: "9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:13:54.922760  517520 cri.go:88] found id: ""
	I0605 18:13:54.922767  517520 logs.go:284] 1 containers: [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20]
	I0605 18:13:54.922822  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:13:54.927730  517520 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 18:13:54.927812  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 18:13:54.980248  517520 cri.go:88] found id: ""
	I0605 18:13:54.980274  517520 logs.go:284] 0 containers: []
	W0605 18:13:54.980289  517520 logs.go:286] No container was found matching "kindnet"
	I0605 18:13:54.980295  517520 cri.go:53] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0605 18:13:54.980359  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0605 18:13:55.041237  517520 cri.go:88] found id: ""
	I0605 18:13:55.041276  517520 logs.go:284] 0 containers: []
	W0605 18:13:55.041285  517520 logs.go:286] No container was found matching "storage-provisioner"
	I0605 18:13:55.041296  517520 logs.go:123] Gathering logs for container status ...
	I0605 18:13:55.041338  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0605 18:13:55.105750  517520 logs.go:123] Gathering logs for kubelet ...
	I0605 18:13:55.105786  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 18:13:55.244917  517520 logs.go:123] Gathering logs for dmesg ...
	I0605 18:13:55.244958  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 18:13:55.269750  517520 logs.go:123] Gathering logs for describe nodes ...
	I0605 18:13:55.269785  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0605 18:13:55.356470  517520 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0605 18:13:55.356497  517520 logs.go:123] Gathering logs for kube-apiserver [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc] ...
	I0605 18:13:55.356532  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:13:55.441214  517520 logs.go:123] Gathering logs for kube-scheduler [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe] ...
	I0605 18:13:55.441244  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:13:55.552895  517520 logs.go:123] Gathering logs for kube-controller-manager [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20] ...
	I0605 18:13:55.552937  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:13:55.604713  517520 logs.go:123] Gathering logs for CRI-O ...
	I0605 18:13:55.604746  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
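The 517520 trace above repeats a single pattern: probe the apiserver's healthz endpoint, and when the probe is refused, enumerate CRI containers and collect their logs. The probe and the enumeration reduce to commands like these (the crictl line is quoted from the Run: lines above; curl is shown only as a manual stand-in for the in-process HTTP check):

	# Manual stand-in for the logged healthz probe; -k skips TLS verification.
	curl -k https://192.168.67.2:8443/healthz
	# Enumerate kube-apiserver containers the way logs.go does.
	sudo crictl ps -a --quiet --name=kube-apiserver
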
	I0605 18:13:56.635673  534165 machine.go:88] provisioning docker machine ...
	I0605 18:13:56.635706  534165 ubuntu.go:169] provisioning hostname "pause-845789"
	I0605 18:13:56.635827  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:13:56.660656  534165 main.go:141] libmachine: Using SSH client type: native
	I0605 18:13:56.661204  534165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33313 <nil> <nil>}
	I0605 18:13:56.661223  534165 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-845789 && echo "pause-845789" | sudo tee /etc/hostname
	I0605 18:13:56.827776  534165 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-845789
	
	I0605 18:13:56.827866  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:13:56.851641  534165 main.go:141] libmachine: Using SSH client type: native
	I0605 18:13:56.852246  534165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33313 <nil> <nil>}
	I0605 18:13:56.852277  534165 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-845789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-845789/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-845789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0605 18:13:56.994052  534165 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0605 18:13:56.994078  534165 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16634-402421/.minikube CaCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16634-402421/.minikube}
	I0605 18:13:56.994114  534165 ubuntu.go:177] setting up certificates
	I0605 18:13:56.994125  534165 provision.go:83] configureAuth start
	I0605 18:13:56.994193  534165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-845789
	I0605 18:13:57.018499  534165 provision.go:138] copyHostCerts
	I0605 18:13:57.018595  534165 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem, removing ...
	I0605 18:13:57.018604  534165 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem
	I0605 18:13:57.018684  534165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/ca.pem (1082 bytes)
	I0605 18:13:57.018782  534165 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem, removing ...
	I0605 18:13:57.018787  534165 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem
	I0605 18:13:57.018815  534165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/cert.pem (1123 bytes)
	I0605 18:13:57.018867  534165 exec_runner.go:144] found /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem, removing ...
	I0605 18:13:57.018871  534165 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem
	I0605 18:13:57.018895  534165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16634-402421/.minikube/key.pem (1675 bytes)
	I0605 18:13:57.018937  534165 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem org=jenkins.pause-845789 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-845789]
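If the server certificate generated above ever needs checking, the SAN list requested (192.168.76.2, 127.0.0.1, localhost, minikube, pause-845789) can be confirmed with openssl; a sketch, assuming an OpenSSL new enough to support -ext:

	# Print only the subjectAltName extension of the generated server cert.
	openssl x509 -in /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem -noout -ext subjectAltName
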
	I0605 18:13:57.805381  534165 provision.go:172] copyRemoteCerts
	I0605 18:13:57.805471  534165 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0605 18:13:57.805528  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:13:57.830171  534165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/pause-845789/id_rsa Username:docker}
	I0605 18:13:57.932249  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0605 18:13:57.962633  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0605 18:13:57.994556  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0605 18:13:58.027378  534165 provision.go:86] duration metric: configureAuth took 1.033206632s
	I0605 18:13:58.027410  534165 ubuntu.go:193] setting minikube options for container-runtime
	I0605 18:13:58.027674  534165 config.go:182] Loaded profile config "pause-845789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 18:13:58.027840  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:13:58.047198  534165 main.go:141] libmachine: Using SSH client type: native
	I0605 18:13:58.047636  534165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39eaa0] 0x3a1430 <nil>  [] 0s} 127.0.0.1 33313 <nil> <nil>}
	I0605 18:13:58.047657  534165 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0605 18:13:58.158136  517520 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:13:58.158524  517520 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0605 18:13:58.158569  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 18:13:58.158627  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 18:13:58.214844  517520 cri.go:88] found id: "7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:13:58.214862  517520 cri.go:88] found id: ""
	I0605 18:13:58.214869  517520 logs.go:284] 1 containers: [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc]
	I0605 18:13:58.214925  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:13:58.220867  517520 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 18:13:58.220939  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 18:13:58.328969  517520 cri.go:88] found id: ""
	I0605 18:13:58.328990  517520 logs.go:284] 0 containers: []
	W0605 18:13:58.328997  517520 logs.go:286] No container was found matching "etcd"
	I0605 18:13:58.329004  517520 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 18:13:58.329079  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 18:13:58.387503  517520 cri.go:88] found id: ""
	I0605 18:13:58.387526  517520 logs.go:284] 0 containers: []
	W0605 18:13:58.387533  517520 logs.go:286] No container was found matching "coredns"
	I0605 18:13:58.387539  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 18:13:58.387612  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 18:13:58.433324  517520 cri.go:88] found id: "f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:13:58.433344  517520 cri.go:88] found id: ""
	I0605 18:13:58.433351  517520 logs.go:284] 1 containers: [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe]
	I0605 18:13:58.433409  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:13:58.438356  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 18:13:58.438445  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 18:13:58.483948  517520 cri.go:88] found id: ""
	I0605 18:13:58.483970  517520 logs.go:284] 0 containers: []
	W0605 18:13:58.483978  517520 logs.go:286] No container was found matching "kube-proxy"
	I0605 18:13:58.483985  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 18:13:58.484047  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 18:13:58.528632  517520 cri.go:88] found id: "9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:13:58.528653  517520 cri.go:88] found id: ""
	I0605 18:13:58.528660  517520 logs.go:284] 1 containers: [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20]
	I0605 18:13:58.528718  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:13:58.533564  517520 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 18:13:58.533640  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 18:13:58.581784  517520 cri.go:88] found id: ""
	I0605 18:13:58.581806  517520 logs.go:284] 0 containers: []
	W0605 18:13:58.581813  517520 logs.go:286] No container was found matching "kindnet"
	I0605 18:13:58.581820  517520 cri.go:53] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0605 18:13:58.581921  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0605 18:13:58.624518  517520 cri.go:88] found id: ""
	I0605 18:13:58.624576  517520 logs.go:284] 0 containers: []
	W0605 18:13:58.624590  517520 logs.go:286] No container was found matching "storage-provisioner"
	I0605 18:13:58.624600  517520 logs.go:123] Gathering logs for container status ...
	I0605 18:13:58.624613  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0605 18:13:58.673658  517520 logs.go:123] Gathering logs for kubelet ...
	I0605 18:13:58.673687  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 18:13:58.806851  517520 logs.go:123] Gathering logs for dmesg ...
	I0605 18:13:58.806890  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 18:13:58.829541  517520 logs.go:123] Gathering logs for describe nodes ...
	I0605 18:13:58.829579  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0605 18:13:58.920026  517520 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0605 18:13:58.920052  517520 logs.go:123] Gathering logs for kube-apiserver [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc] ...
	I0605 18:13:58.920064  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:13:58.972604  517520 logs.go:123] Gathering logs for kube-scheduler [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe] ...
	I0605 18:13:58.972636  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:13:59.078933  517520 logs.go:123] Gathering logs for kube-controller-manager [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20] ...
	I0605 18:13:59.078969  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:13:59.121751  517520 logs.go:123] Gathering logs for CRI-O ...
	I0605 18:13:59.121778  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0605 18:14:01.673496  517520 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:14:01.673901  517520 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0605 18:14:01.673945  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 18:14:01.674009  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 18:14:01.718691  517520 cri.go:88] found id: "7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:14:01.718711  517520 cri.go:88] found id: ""
	I0605 18:14:01.718718  517520 logs.go:284] 1 containers: [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc]
	I0605 18:14:01.718773  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:01.723348  517520 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 18:14:01.723438  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 18:14:01.768515  517520 cri.go:88] found id: ""
	I0605 18:14:01.768537  517520 logs.go:284] 0 containers: []
	W0605 18:14:01.768544  517520 logs.go:286] No container was found matching "etcd"
	I0605 18:14:01.768554  517520 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 18:14:01.768615  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 18:14:01.817523  517520 cri.go:88] found id: ""
	I0605 18:14:01.817545  517520 logs.go:284] 0 containers: []
	W0605 18:14:01.817553  517520 logs.go:286] No container was found matching "coredns"
	I0605 18:14:01.817560  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 18:14:01.817630  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 18:14:01.863032  517520 cri.go:88] found id: "f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:14:01.863055  517520 cri.go:88] found id: ""
	I0605 18:14:01.863062  517520 logs.go:284] 1 containers: [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe]
	I0605 18:14:01.863118  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:01.867977  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 18:14:01.868099  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 18:14:01.916515  517520 cri.go:88] found id: ""
	I0605 18:14:01.916610  517520 logs.go:284] 0 containers: []
	W0605 18:14:01.916655  517520 logs.go:286] No container was found matching "kube-proxy"
	I0605 18:14:01.916692  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 18:14:01.916820  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 18:14:01.964971  517520 cri.go:88] found id: "9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:14:01.965035  517520 cri.go:88] found id: ""
	I0605 18:14:01.965088  517520 logs.go:284] 1 containers: [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20]
	I0605 18:14:01.965188  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:01.970379  517520 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 18:14:01.970452  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 18:14:02.023731  517520 cri.go:88] found id: ""
	I0605 18:14:02.023759  517520 logs.go:284] 0 containers: []
	W0605 18:14:02.023768  517520 logs.go:286] No container was found matching "kindnet"
	I0605 18:14:02.023775  517520 cri.go:53] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0605 18:14:02.023843  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0605 18:14:02.069412  517520 cri.go:88] found id: ""
	I0605 18:14:02.069441  517520 logs.go:284] 0 containers: []
	W0605 18:14:02.069451  517520 logs.go:286] No container was found matching "storage-provisioner"
	I0605 18:14:02.069461  517520 logs.go:123] Gathering logs for kube-controller-manager [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20] ...
	I0605 18:14:02.069475  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:14:02.115156  517520 logs.go:123] Gathering logs for CRI-O ...
	I0605 18:14:02.115185  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0605 18:14:02.166825  517520 logs.go:123] Gathering logs for container status ...
	I0605 18:14:02.166860  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0605 18:14:02.214922  517520 logs.go:123] Gathering logs for kubelet ...
	I0605 18:14:02.214959  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 18:14:03.542393  534165 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0605 18:14:03.542418  534165 machine.go:91] provisioned docker machine in 6.906724296s
	I0605 18:14:03.542429  534165 start.go:300] post-start starting for "pause-845789" (driver="docker")
	I0605 18:14:03.542436  534165 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0605 18:14:03.542502  534165 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0605 18:14:03.542549  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:14:03.562739  534165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/pause-845789/id_rsa Username:docker}
	I0605 18:14:03.671703  534165 ssh_runner.go:195] Run: cat /etc/os-release
	I0605 18:14:03.676225  534165 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0605 18:14:03.676268  534165 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0605 18:14:03.676285  534165 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0605 18:14:03.676292  534165 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0605 18:14:03.676308  534165 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/addons for local assets ...
	I0605 18:14:03.676373  534165 filesync.go:126] Scanning /home/jenkins/minikube-integration/16634-402421/.minikube/files for local assets ...
	I0605 18:14:03.676462  534165 filesync.go:149] local asset: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem -> 4078132.pem in /etc/ssl/certs
	I0605 18:14:03.676574  534165 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0605 18:14:03.688093  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 18:14:03.718267  534165 start.go:303] post-start completed in 175.823334ms
	I0605 18:14:03.718350  534165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 18:14:03.718393  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:14:03.737392  534165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/pause-845789/id_rsa Username:docker}
	I0605 18:14:03.834822  534165 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0605 18:14:03.841415  534165 fix.go:57] fixHost completed within 7.228929022s
	I0605 18:14:03.841441  534165 start.go:83] releasing machines lock for "pause-845789", held for 7.228985547s
	I0605 18:14:03.841525  534165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-845789
	I0605 18:14:03.860268  534165 ssh_runner.go:195] Run: cat /version.json
	I0605 18:14:03.860279  534165 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0605 18:14:03.860327  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:14:03.860329  534165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-845789
	I0605 18:14:03.889402  534165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/pause-845789/id_rsa Username:docker}
	I0605 18:14:03.904059  534165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/pause-845789/id_rsa Username:docker}
	I0605 18:14:04.129785  534165 ssh_runner.go:195] Run: systemctl --version
	I0605 18:14:04.135993  534165 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0605 18:14:04.287895  534165 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0605 18:14:04.296503  534165 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 18:14:04.311964  534165 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0605 18:14:04.312136  534165 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0605 18:14:04.331537  534165 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
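As with the printf earlier, `%!p(MISSING)` is a logging artifact: the find invocation presumably uses `-printf "%p, "` to list each config file as it is renamed, i.e. something like:

	# Reconstructed: rename bridge/podman CNI configs so CRI-O ignores them, printing each path.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
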
	I0605 18:14:04.331625  534165 start.go:481] detecting cgroup driver to use...
	I0605 18:14:04.331685  534165 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0605 18:14:04.331770  534165 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0605 18:14:04.354092  534165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0605 18:14:04.374054  534165 docker.go:193] disabling cri-docker service (if available) ...
	I0605 18:14:04.374213  534165 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0605 18:14:04.416882  534165 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0605 18:14:04.505815  534165 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0605 18:14:04.893109  534165 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0605 18:14:05.263372  534165 docker.go:209] disabling docker service ...
	I0605 18:14:05.263494  534165 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0605 18:14:05.302641  534165 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0605 18:14:05.332231  534165 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0605 18:14:05.553230  534165 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0605 18:14:05.816090  534165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0605 18:14:05.868579  534165 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0605 18:14:05.946631  534165 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0605 18:14:05.946742  534165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:14:05.983326  534165 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0605 18:14:05.983432  534165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:14:06.021664  534165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:14:06.040303  534165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0605 18:14:06.089592  534165 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0605 18:14:06.131352  534165 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0605 18:14:06.167713  534165 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0605 18:14:06.206901  534165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
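After the sed edits and the daemon-reload above, the drop-in at /etc/crio/crio.conf.d/02-crio.conf should carry the pause-image and cgroup settings just written; a quick confirmation sketch, using only the file and service named in the log:

	# Show the three rewritten keys and check that CRI-O is still active.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio
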
	I0605 18:14:02.344994  517520 logs.go:123] Gathering logs for dmesg ...
	I0605 18:14:02.345031  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 18:14:02.366793  517520 logs.go:123] Gathering logs for describe nodes ...
	I0605 18:14:02.366821  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0605 18:14:02.446627  517520 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0605 18:14:02.446650  517520 logs.go:123] Gathering logs for kube-apiserver [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc] ...
	I0605 18:14:02.446662  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:14:02.512913  517520 logs.go:123] Gathering logs for kube-scheduler [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe] ...
	I0605 18:14:02.512981  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:14:05.119183  517520 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:14:05.119615  517520 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
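
Each "Checking apiserver healthz" / "stopped" pair above is one failed HTTPS probe of https://192.168.67.2:8443/healthz; the loop keeps retrying until restartCluster's deadline expires further down. A sketch of that poll-until-healthy loop (the URL and the roughly four-minute budget come from the log; the per-probe timeout, retry interval, and the InsecureSkipVerify shortcut are illustrative assumptions, since the real client trusts minikube's own CA):

// healthz.go - a sketch of the poll loop behind the "Checking apiserver
// healthz" lines: GET /healthz until 200 OK or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second, // per-probe timeout, an assumption
		// Illustrative shortcut: the real client trusts minikube's CA
		// instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // restartCluster gives up after ~4m
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(3 * time.Second) // retry interval, an assumption
	}
	fmt.Println("apiserver healthz never reported healthy")
}
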
	I0605 18:14:05.119681  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 18:14:05.119739  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 18:14:05.197558  517520 cri.go:88] found id: "7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:14:05.197577  517520 cri.go:88] found id: ""
	I0605 18:14:05.197584  517520 logs.go:284] 1 containers: [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc]
	I0605 18:14:05.197639  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:05.205914  517520 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 18:14:05.205985  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 18:14:05.285297  517520 cri.go:88] found id: ""
	I0605 18:14:05.285317  517520 logs.go:284] 0 containers: []
	W0605 18:14:05.285325  517520 logs.go:286] No container was found matching "etcd"
	I0605 18:14:05.285332  517520 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 18:14:05.285389  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 18:14:05.352980  517520 cri.go:88] found id: ""
	I0605 18:14:05.353007  517520 logs.go:284] 0 containers: []
	W0605 18:14:05.353015  517520 logs.go:286] No container was found matching "coredns"
	I0605 18:14:05.353022  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 18:14:05.353098  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 18:14:05.434370  517520 cri.go:88] found id: "f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:14:05.434392  517520 cri.go:88] found id: ""
	I0605 18:14:05.434400  517520 logs.go:284] 1 containers: [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe]
	I0605 18:14:05.434458  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:05.440389  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 18:14:05.440466  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 18:14:05.578312  517520 cri.go:88] found id: ""
	I0605 18:14:05.578332  517520 logs.go:284] 0 containers: []
	W0605 18:14:05.578339  517520 logs.go:286] No container was found matching "kube-proxy"
	I0605 18:14:05.578346  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 18:14:05.578404  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 18:14:05.676009  517520 cri.go:88] found id: "9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:14:05.676029  517520 cri.go:88] found id: ""
	I0605 18:14:05.676036  517520 logs.go:284] 1 containers: [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20]
	I0605 18:14:05.676096  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:05.681291  517520 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 18:14:05.681362  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 18:14:05.741704  517520 cri.go:88] found id: ""
	I0605 18:14:05.741776  517520 logs.go:284] 0 containers: []
	W0605 18:14:05.741792  517520 logs.go:286] No container was found matching "kindnet"
	I0605 18:14:05.741798  517520 cri.go:53] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0605 18:14:05.741917  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0605 18:14:05.827887  517520 cri.go:88] found id: ""
	I0605 18:14:05.827907  517520 logs.go:284] 0 containers: []
	W0605 18:14:05.827972  517520 logs.go:286] No container was found matching "storage-provisioner"
	I0605 18:14:05.827987  517520 logs.go:123] Gathering logs for describe nodes ...
	I0605 18:14:05.828031  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0605 18:14:05.989985  517520 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0605 18:14:05.990003  517520 logs.go:123] Gathering logs for kube-apiserver [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc] ...
	I0605 18:14:05.990015  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:14:06.087333  517520 logs.go:123] Gathering logs for kube-scheduler [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe] ...
	I0605 18:14:06.087368  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:14:06.286596  517520 logs.go:123] Gathering logs for kube-controller-manager [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20] ...
	I0605 18:14:06.286639  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:14:06.368836  517520 logs.go:123] Gathering logs for CRI-O ...
	I0605 18:14:06.368870  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0605 18:14:06.459051  517520 logs.go:123] Gathering logs for container status ...
	I0605 18:14:06.459141  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0605 18:14:06.570731  517520 logs.go:123] Gathering logs for kubelet ...
	I0605 18:14:06.570766  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 18:14:06.709584  517520 logs.go:123] Gathering logs for dmesg ...
	I0605 18:14:06.709624  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 18:14:06.514724  534165 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0605 18:14:09.233020  517520 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:14:09.233398  517520 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0605 18:14:09.233437  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 18:14:09.233488  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 18:14:09.278541  517520 cri.go:88] found id: "7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:14:09.278565  517520 cri.go:88] found id: ""
	I0605 18:14:09.278574  517520 logs.go:284] 1 containers: [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc]
	I0605 18:14:09.278636  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:09.283325  517520 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 18:14:09.283422  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 18:14:09.327395  517520 cri.go:88] found id: ""
	I0605 18:14:09.327415  517520 logs.go:284] 0 containers: []
	W0605 18:14:09.327424  517520 logs.go:286] No container was found matching "etcd"
	I0605 18:14:09.327430  517520 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 18:14:09.327488  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 18:14:09.372339  517520 cri.go:88] found id: ""
	I0605 18:14:09.372360  517520 logs.go:284] 0 containers: []
	W0605 18:14:09.372368  517520 logs.go:286] No container was found matching "coredns"
	I0605 18:14:09.372374  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 18:14:09.372433  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 18:14:09.415164  517520 cri.go:88] found id: "f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:14:09.415184  517520 cri.go:88] found id: ""
	I0605 18:14:09.415192  517520 logs.go:284] 1 containers: [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe]
	I0605 18:14:09.415252  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:09.419885  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 18:14:09.419979  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 18:14:09.467969  517520 cri.go:88] found id: ""
	I0605 18:14:09.467991  517520 logs.go:284] 0 containers: []
	W0605 18:14:09.468000  517520 logs.go:286] No container was found matching "kube-proxy"
	I0605 18:14:09.468006  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 18:14:09.468069  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 18:14:09.511246  517520 cri.go:88] found id: "9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:14:09.511267  517520 cri.go:88] found id: ""
	I0605 18:14:09.511274  517520 logs.go:284] 1 containers: [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20]
	I0605 18:14:09.511346  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:09.516168  517520 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 18:14:09.516248  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 18:14:09.561835  517520 cri.go:88] found id: ""
	I0605 18:14:09.561858  517520 logs.go:284] 0 containers: []
	W0605 18:14:09.561866  517520 logs.go:286] No container was found matching "kindnet"
	I0605 18:14:09.561872  517520 cri.go:53] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0605 18:14:09.561933  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0605 18:14:09.605682  517520 cri.go:88] found id: ""
	I0605 18:14:09.605705  517520 logs.go:284] 0 containers: []
	W0605 18:14:09.605713  517520 logs.go:286] No container was found matching "storage-provisioner"
	I0605 18:14:09.605722  517520 logs.go:123] Gathering logs for kubelet ...
	I0605 18:14:09.605735  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 18:14:09.735120  517520 logs.go:123] Gathering logs for dmesg ...
	I0605 18:14:09.735158  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 18:14:09.757358  517520 logs.go:123] Gathering logs for describe nodes ...
	I0605 18:14:09.757531  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0605 18:14:09.837380  517520 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0605 18:14:09.837446  517520 logs.go:123] Gathering logs for kube-apiserver [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc] ...
	I0605 18:14:09.837472  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:14:09.905478  517520 logs.go:123] Gathering logs for kube-scheduler [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe] ...
	I0605 18:14:09.905509  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:14:10.027179  517520 logs.go:123] Gathering logs for kube-controller-manager [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20] ...
	I0605 18:14:10.027217  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:14:10.076244  517520 logs.go:123] Gathering logs for CRI-O ...
	I0605 18:14:10.076273  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0605 18:14:10.133069  517520 logs.go:123] Gathering logs for container status ...
	I0605 18:14:10.133105  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0605 18:14:16.139807  534165 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.625046719s)
	I0605 18:14:16.139835  534165 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0605 18:14:16.139890  534165 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0605 18:14:16.146463  534165 start.go:549] Will wait 60s for crictl version
	I0605 18:14:16.146535  534165 ssh_runner.go:195] Run: which crictl
	I0605 18:14:16.152202  534165 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0605 18:14:16.215777  534165 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
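
After restarting CRI-O, the runner waits up to 60s for /var/run/crio/crio.sock to appear and then gives crictl another 60s budget to report a version, as the "Will wait 60s" lines above show. A sketch of the socket wait (path and budget from the log; the polling interval is an assumption):

// waitsock.go - a sketch of "Will wait 60s for socket path
// /var/run/crio/crio.sock": stat the socket until it exists or time runs out.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock" // path from the log
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("socket ready:", sock)
			return
		}
		time.Sleep(500 * time.Millisecond) // polling interval, an assumption
	}
	fmt.Println("timed out waiting for", sock)
}
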
	I0605 18:14:16.215865  534165 ssh_runner.go:195] Run: crio --version
	I0605 18:14:16.265299  534165 ssh_runner.go:195] Run: crio --version
	I0605 18:14:16.314486  534165 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0605 18:14:16.317520  534165 cli_runner.go:164] Run: docker network inspect pause-845789 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0605 18:14:12.688035  517520 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:14:12.688480  517520 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0605 18:14:12.688545  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0605 18:14:12.688621  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0605 18:14:12.736567  517520 cri.go:88] found id: "7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:14:12.736641  517520 cri.go:88] found id: ""
	I0605 18:14:12.736676  517520 logs.go:284] 1 containers: [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc]
	I0605 18:14:12.736779  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:12.741557  517520 cri.go:53] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0605 18:14:12.741630  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0605 18:14:12.785832  517520 cri.go:88] found id: ""
	I0605 18:14:12.785854  517520 logs.go:284] 0 containers: []
	W0605 18:14:12.785864  517520 logs.go:286] No container was found matching "etcd"
	I0605 18:14:12.785872  517520 cri.go:53] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0605 18:14:12.785942  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0605 18:14:12.831024  517520 cri.go:88] found id: ""
	I0605 18:14:12.831096  517520 logs.go:284] 0 containers: []
	W0605 18:14:12.831117  517520 logs.go:286] No container was found matching "coredns"
	I0605 18:14:12.831138  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0605 18:14:12.831226  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0605 18:14:12.875259  517520 cri.go:88] found id: "f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:14:12.875323  517520 cri.go:88] found id: ""
	I0605 18:14:12.875356  517520 logs.go:284] 1 containers: [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe]
	I0605 18:14:12.875444  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:12.880519  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0605 18:14:12.880617  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0605 18:14:12.924716  517520 cri.go:88] found id: ""
	I0605 18:14:12.924738  517520 logs.go:284] 0 containers: []
	W0605 18:14:12.924746  517520 logs.go:286] No container was found matching "kube-proxy"
	I0605 18:14:12.924752  517520 cri.go:53] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0605 18:14:12.924812  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0605 18:14:12.974499  517520 cri.go:88] found id: "9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:14:12.974574  517520 cri.go:88] found id: ""
	I0605 18:14:12.974597  517520 logs.go:284] 1 containers: [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20]
	I0605 18:14:12.974676  517520 ssh_runner.go:195] Run: which crictl
	I0605 18:14:12.979508  517520 cri.go:53] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0605 18:14:12.979599  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0605 18:14:13.026700  517520 cri.go:88] found id: ""
	I0605 18:14:13.026723  517520 logs.go:284] 0 containers: []
	W0605 18:14:13.026731  517520 logs.go:286] No container was found matching "kindnet"
	I0605 18:14:13.026738  517520 cri.go:53] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0605 18:14:13.026824  517520 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0605 18:14:13.072432  517520 cri.go:88] found id: ""
	I0605 18:14:13.072458  517520 logs.go:284] 0 containers: []
	W0605 18:14:13.072466  517520 logs.go:286] No container was found matching "storage-provisioner"
	I0605 18:14:13.072476  517520 logs.go:123] Gathering logs for describe nodes ...
	I0605 18:14:13.072530  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0605 18:14:13.158019  517520 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0605 18:14:13.158041  517520 logs.go:123] Gathering logs for kube-apiserver [7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc] ...
	I0605 18:14:13.158053  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c8b484801c5247960eee337dc4ea07e0d6cc43fd385ac3f4fc914548c5e40fc"
	I0605 18:14:13.219384  517520 logs.go:123] Gathering logs for kube-scheduler [f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe] ...
	I0605 18:14:13.219417  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f09ec20aa55a798c1c3c391304a0c092f8ac2851d024b17d8bc5f24080e2fbbe"
	I0605 18:14:13.334814  517520 logs.go:123] Gathering logs for kube-controller-manager [9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20] ...
	I0605 18:14:13.334853  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b8070b9e1df124662e1c3d4b08c3772d308eef3d398c93b38b72d6087c25e20"
	I0605 18:14:13.381552  517520 logs.go:123] Gathering logs for CRI-O ...
	I0605 18:14:13.381583  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0605 18:14:13.434595  517520 logs.go:123] Gathering logs for container status ...
	I0605 18:14:13.434632  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0605 18:14:13.483778  517520 logs.go:123] Gathering logs for kubelet ...
	I0605 18:14:13.483806  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0605 18:14:13.619154  517520 logs.go:123] Gathering logs for dmesg ...
	I0605 18:14:13.619194  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0605 18:14:16.142173  517520 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0605 18:14:16.142571  517520 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0605 18:14:16.142643  517520 kubeadm.go:640] restartCluster took 4m3.619765286s
	W0605 18:14:16.142723  517520 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	I0605 18:14:16.142759  517520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0605 18:14:16.335674  534165 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0605 18:14:16.340802  534165 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 18:14:16.340879  534165 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 18:14:16.390311  534165 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 18:14:16.390330  534165 crio.go:415] Images already preloaded, skipping extraction
	I0605 18:14:16.390388  534165 ssh_runner.go:195] Run: sudo crictl images --output json
	I0605 18:14:16.434732  534165 crio.go:496] all images are preloaded for cri-o runtime.
	I0605 18:14:16.434755  534165 cache_images.go:84] Images are preloaded, skipping loading
	I0605 18:14:16.434834  534165 ssh_runner.go:195] Run: crio config
	I0605 18:14:16.493681  534165 cni.go:84] Creating CNI manager for ""
	I0605 18:14:16.493716  534165 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 18:14:16.493728  534165 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0605 18:14:16.493764  534165 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-845789 NodeName:pause-845789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0605 18:14:16.493977  534165 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-845789"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
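
Note that the KubeletConfiguration above pins cgroupDriver: cgroupfs, the same value the earlier sed edit wrote into CRI-O's cgroup_manager; if the two drivers disagreed, the kubelet could not start pods. A small sketch of that consistency check (a hypothetical helper, not part of minikube; it reads the kubelet config path shown in the ExecStart flags below and uses gopkg.in/yaml.v3):

// checkdriver.go - hypothetical helper: parse the kubelet config and confirm
// cgroupDriver matches the "cgroupfs" that CRI-O was configured with.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	Kind         string `yaml:"kind"`
	CgroupDriver string `yaml:"cgroupDriver"`
}

func main() {
	// Path taken from the kubelet ExecStart flags below (--config=...).
	data, err := os.ReadFile("/var/lib/kubelet/config.yaml")
	if err != nil {
		panic(err)
	}
	var kc kubeletConfig
	if err := yaml.Unmarshal(data, &kc); err != nil {
		panic(err)
	}
	if kc.CgroupDriver != "cgroupfs" {
		fmt.Printf("mismatch: kubelet uses %q, CRI-O uses \"cgroupfs\"\n", kc.CgroupDriver)
		return
	}
	fmt.Println("cgroup drivers agree: cgroupfs")
}
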
	
	I0605 18:14:16.494086  534165 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-845789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:pause-845789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0605 18:14:16.494189  534165 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0605 18:14:16.505882  534165 binaries.go:44] Found k8s binaries, skipping transfer
	I0605 18:14:16.505958  534165 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0605 18:14:16.516915  534165 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0605 18:14:16.540373  534165 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0605 18:14:16.563154  534165 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0605 18:14:16.586542  534165 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0605 18:14:16.591444  534165 certs.go:56] Setting up /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789 for IP: 192.168.76.2
	I0605 18:14:16.591479  534165 certs.go:190] acquiring lock for shared ca certs: {Name:mkcde6289d01a116d789395fcd8dd485889e790f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0605 18:14:16.591615  534165 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key
	I0605 18:14:16.591665  534165 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key
	I0605 18:14:16.591751  534165 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/client.key
	I0605 18:14:16.591822  534165 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/apiserver.key.31bdca25
	I0605 18:14:16.591868  534165 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/proxy-client.key
	I0605 18:14:16.592010  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem (1338 bytes)
	W0605 18:14:16.592044  534165 certs.go:433] ignoring /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813_empty.pem, impossibly tiny 0 bytes
	I0605 18:14:16.592060  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca-key.pem (1679 bytes)
	I0605 18:14:16.592088  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/ca.pem (1082 bytes)
	I0605 18:14:16.592116  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/cert.pem (1123 bytes)
	I0605 18:14:16.592147  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/certs/home/jenkins/minikube-integration/16634-402421/.minikube/certs/key.pem (1675 bytes)
	I0605 18:14:16.592196  534165 certs.go:437] found cert: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem (1708 bytes)
	I0605 18:14:16.592832  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0605 18:14:16.623762  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0605 18:14:16.661920  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0605 18:14:16.697772  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/pause-845789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0605 18:14:16.731408  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0605 18:14:16.765072  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0605 18:14:16.797709  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0605 18:14:16.833084  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0605 18:14:16.867571  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/ssl/certs/4078132.pem --> /usr/share/ca-certificates/4078132.pem (1708 bytes)
	I0605 18:14:16.903190  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0605 18:14:16.935230  534165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16634-402421/.minikube/certs/407813.pem --> /usr/share/ca-certificates/407813.pem (1338 bytes)
	I0605 18:14:16.966804  534165 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0605 18:14:16.991740  534165 ssh_runner.go:195] Run: openssl version
	I0605 18:14:17.000624  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4078132.pem && ln -fs /usr/share/ca-certificates/4078132.pem /etc/ssl/certs/4078132.pem"
	I0605 18:14:17.015358  534165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4078132.pem
	I0605 18:14:17.020925  534165 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun  5 17:39 /usr/share/ca-certificates/4078132.pem
	I0605 18:14:17.021018  534165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4078132.pem
	I0605 18:14:17.032511  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4078132.pem /etc/ssl/certs/3ec20f2e.0"
	I0605 18:14:17.044089  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0605 18:14:17.057870  534165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:14:17.063568  534165 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun  5 17:31 /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:14:17.063643  534165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0605 18:14:17.073249  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0605 18:14:17.085138  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407813.pem && ln -fs /usr/share/ca-certificates/407813.pem /etc/ssl/certs/407813.pem"
	I0605 18:14:17.097990  534165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407813.pem
	I0605 18:14:17.103818  534165 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun  5 17:39 /usr/share/ca-certificates/407813.pem
	I0605 18:14:17.103912  534165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407813.pem
	I0605 18:14:17.113730  534165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407813.pem /etc/ssl/certs/51391683.0"
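
The ls/openssl/ln sequence above is how each CA lands in the system trust store: hash the PEM with openssl x509 -hash, then symlink it into /etc/ssl/certs under <hash>.0 (b5213941.0 for minikubeCA.pem; 3ec20f2e.0 and 51391683.0 for the test certs). A sketch of one hash-and-symlink step (shelling out to openssl exactly as the log does; the paths are the ones shown above):

// certlink.go - a sketch of one hash-and-symlink step from the log: compute
// the OpenSSL subject hash of a PEM cert and link /etc/ssl/certs/<hash>.0 to it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" in the log above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pem)
}
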
	I0605 18:14:17.125505  534165 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0605 18:14:17.130934  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0605 18:14:17.140970  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0605 18:14:17.150717  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0605 18:14:17.160039  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0605 18:14:17.170167  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0605 18:14:17.179450  534165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
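
The six openssl x509 -checkend 86400 runs above ask whether each control-plane certificate is still valid 24 hours from now; a non-zero exit would force regeneration before StartCluster. The equivalent check in pure Go with crypto/x509 (the path is the first cert checked above; everything else is standard library):

// checkend.go - the pure-Go equivalent of `openssl x509 -noout -checkend 86400`:
// does the certificate expire within the next 24 hours?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
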
	I0605 18:14:17.189020  534165 kubeadm.go:404] StartCluster: {Name:pause-845789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-845789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:14:17.189203  534165 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0605 18:14:17.189272  534165 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0605 18:14:17.253500  534165 cri.go:88] found id: "f5667873af88719df6b5dedf9f489eb022c51271f2cd1b3eb469e0d4dacc97a7"
	I0605 18:14:17.253524  534165 cri.go:88] found id: "c9b4f7001f3c15228d03cea251b3deb8ed506d11399610450285f98d983fda93"
	I0605 18:14:17.253537  534165 cri.go:88] found id: "4fa0d894ad1629fc01f6d8b191a6350242e7cac66cac05705b065bc6f0d07664"
	I0605 18:14:17.253542  534165 cri.go:88] found id: "71d3d8c961e5398097e4f6db56499fa661e52ecb42ce4529773a11baf8e4738c"
	I0605 18:14:17.253546  534165 cri.go:88] found id: "f4898fe3b98d8da5dd96757f14bd45d4596b39d01316f166c39623a20ca9c09e"
	I0605 18:14:17.253551  534165 cri.go:88] found id: "9d612546a5b5f10d19269ed6d4beb3cf134ca893d0c7b8c4eac3c5d0535e98fb"
	I0605 18:14:17.253555  534165 cri.go:88] found id: "0697dd4705d853e12210ccf5db442dc6f56f033e9df790b5229be8ea9f99475e"
	I0605 18:14:17.253559  534165 cri.go:88] found id: "4d5bd8c85ccb6172975ab10caa1da5a458e7ccf843a84a1de3429849770fbbe8"
	I0605 18:14:17.253563  534165 cri.go:88] found id: "85f49c943fc9ff6451d80220b5caf2e551d9aaf68d9e1a871c1d949465da7dde"
	I0605 18:14:17.253576  534165 cri.go:88] found id: "b3276bd40b155a7e405949bcc44f28d8f3463a34813f03a65d81e6b2ed7c635c"
	I0605 18:14:17.253585  534165 cri.go:88] found id: "2d74c55ee8d9ad65c7c364cb88ed89db229e28095c42b40122f8af0df70cd7e5"
	I0605 18:14:17.253590  534165 cri.go:88] found id: "8914d3d8c576306c35e741da6ce526747cf90175f3bb570e9f30860e67221a94"
	I0605 18:14:17.253594  534165 cri.go:88] found id: "6026c38e08eb3d1b7d8d26bdc1ec195ad0ac869068f3d92df6b7f454615c3a5c"
	I0605 18:14:17.253611  534165 cri.go:88] found id: "099ec7a9893d2275206ff3bf58acd010c8000628d21e04f52a37e4e26135ee5b"
	I0605 18:14:17.253617  534165 cri.go:88] found id: "c4e6052c8ddbebdc10f9279a4c704e7b5c504411d2b484170b5233330893d115"
	I0605 18:14:17.253625  534165 cri.go:88] found id: "742da03ee25f724e2ebc7481c76123716fea92ba9ea1339cfa37e54549863f52"
	I0605 18:14:17.253629  534165 cri.go:88] found id: ""
	I0605 18:14:17.253690  534165 ssh_runner.go:195] Run: sudo runc list -f json
	
	* 
	* ==> CRI-O <==
	* Jun 05 18:14:23 pause-845789 crio[2699]: time="2023-06-05 18:14:23.895116030Z" level=info msg="Updated default CNI network name to kindnet"
	Jun 05 18:14:23 pause-845789 crio[2699]: time="2023-06-05 18:14:23.895133925Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Jun 05 18:14:23 pause-845789 crio[2699]: time="2023-06-05 18:14:23.911685057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jun 05 18:14:23 pause-845789 crio[2699]: time="2023-06-05 18:14:23.911721480Z" level=info msg="Updated default CNI network name to kindnet"
	Jun 05 18:14:36 pause-845789 crio[2699]: time="2023-06-05 18:14:36.941174711Z" level=info msg="Stopping pod sandbox: 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=6dadb2a5-3f10-4617-9238-44fb1a9d5962 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 18:14:36 pause-845789 crio[2699]: time="2023-06-05 18:14:36.941421652Z" level=info msg="Got pod network &{Name:coredns-5d78c9869d-fhpqv Namespace:kube-system ID:3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f UID:30efb721-d346-4e6e-b034-3e3ec690f8e8 NetNS:/var/run/netns/2f3d13eb-5fb4-42e4-9049-3577ac586d7b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 05 18:14:36 pause-845789 crio[2699]: time="2023-06-05 18:14:36.941560500Z" level=info msg="Deleting pod kube-system_coredns-5d78c9869d-fhpqv from CNI network \"kindnet\" (type=ptp)"
	Jun 05 18:14:36 pause-845789 crio[2699]: time="2023-06-05 18:14:36.984893246Z" level=info msg="Stopped pod sandbox: 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=6dadb2a5-3f10-4617-9238-44fb1a9d5962 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 18:14:37 pause-845789 crio[2699]: time="2023-06-05 18:14:37.608927878Z" level=info msg="Removing container: 0697dd4705d853e12210ccf5db442dc6f56f033e9df790b5229be8ea9f99475e" id=c532cb0e-36e7-48af-ab91-0adcf28977ed name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 05 18:14:37 pause-845789 crio[2699]: time="2023-06-05 18:14:37.638536135Z" level=info msg="Removed container 0697dd4705d853e12210ccf5db442dc6f56f033e9df790b5229be8ea9f99475e: kube-system/coredns-5d78c9869d-fhpqv/coredns" id=c532cb0e-36e7-48af-ab91-0adcf28977ed name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.033364694Z" level=info msg="Stopping pod sandbox: 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=890b88f6-0466-4e97-a75b-11c559bb1c69 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.033416493Z" level=info msg="Stopped pod sandbox (already stopped): 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=890b88f6-0466-4e97-a75b-11c559bb1c69 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.033871057Z" level=info msg="Removing pod sandbox: 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=9676990c-40c7-4e6c-8725-12a76515eb91 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.042160765Z" level=info msg="Removed pod sandbox: 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=9676990c-40c7-4e6c-8725-12a76515eb91 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.210111087Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=ab917904-de9d-490f-818e-9a59074bdc87 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.210394213Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ab917904-de9d-490f-818e-9a59074bdc87 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.211572461Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=b3a42ddc-e906-4a75-8246-921d7a2fe013 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.211773594Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b3a42ddc-e906-4a75-8246-921d7a2fe013 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.213037848Z" level=info msg="Creating container: kube-system/coredns-5d78c9869d-lkkp2/coredns" id=8d2c7e65-3f54-4934-92d4-15895d3a135e name=/runtime.v1.RuntimeService/CreateContainer
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.213142340Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.226403642Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4a55f2fd1b6b2613539dced2c2cf3869a5919ac7c322a8896851b37fb51d5bac/merged/etc/passwd: no such file or directory"
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.226460356Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4a55f2fd1b6b2613539dced2c2cf3869a5919ac7c322a8896851b37fb51d5bac/merged/etc/group: no such file or directory"
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.295287725Z" level=info msg="Created container a8ee47583a79d548703a4cb8b758115326cf56bfefa96699e29fea9b33c518d5: kube-system/coredns-5d78c9869d-lkkp2/coredns" id=8d2c7e65-3f54-4934-92d4-15895d3a135e name=/runtime.v1.RuntimeService/CreateContainer
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.296333904Z" level=info msg="Starting container: a8ee47583a79d548703a4cb8b758115326cf56bfefa96699e29fea9b33c518d5" id=0d37683c-c6d5-493f-b809-4fdbdc1b7cde name=/runtime.v1.RuntimeService/StartContainer
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.311269749Z" level=info msg="Started container" PID=3496 containerID=a8ee47583a79d548703a4cb8b758115326cf56bfefa96699e29fea9b33c518d5 description=kube-system/coredns-5d78c9869d-lkkp2/coredns id=0d37683c-c6d5-493f-b809-4fdbdc1b7cde name=/runtime.v1.RuntimeService/StartContainer sandboxID=49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8ee47583a79d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   4 seconds ago       Running             coredns                   2                   49ed23ca3133e       coredns-5d78c9869d-lkkp2
	1de7fa90fe9ac       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4   24 seconds ago      Running             kube-controller-manager   2                   5f925fea82d08       kube-controller-manager-pause-845789
	9f4e5a1f1b169       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0   24 seconds ago      Running             kube-proxy                2                   5f6c63a8a4eee       kube-proxy-hkn5d
	e523467def4fe       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840   24 seconds ago      Running             kube-scheduler            2                   c173e49d19449       kube-scheduler-pause-845789
	1b213002395e7       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   24 seconds ago      Running             kindnet-cni               2                   36e126b3e99c1       kindnet-qfl6w
	9c9eeb2c7e3ec       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae   24 seconds ago      Running             kube-apiserver            2                   13f1aadfbeac9       kube-apiserver-pause-845789
	b656bdc2a1d92       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   24 seconds ago      Running             etcd                      2                   df76f33ae89d6       etcd-pause-845789
	f5667873af887       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   37 seconds ago      Exited              kindnet-cni               1                   36e126b3e99c1       kindnet-qfl6w
	c9b4f7001f3c1       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   37 seconds ago      Exited              etcd                      1                   df76f33ae89d6       etcd-pause-845789
	4fa0d894ad162       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   37 seconds ago      Exited              coredns                   1                   49ed23ca3133e       coredns-5d78c9869d-lkkp2
	71d3d8c961e53       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840   37 seconds ago      Exited              kube-scheduler            1                   c173e49d19449       kube-scheduler-pause-845789
	f4898fe3b98d8       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0   37 seconds ago      Exited              kube-proxy                1                   5f6c63a8a4eee       kube-proxy-hkn5d
	9d612546a5b5f       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae   37 seconds ago      Exited              kube-apiserver            1                   13f1aadfbeac9       kube-apiserver-pause-845789
	4d5bd8c85ccb6       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4   37 seconds ago      Exited              kube-controller-manager   1                   5f925fea82d08       kube-controller-manager-pause-845789
	
	* 
	* ==> coredns [4fa0d894ad1629fc01f6d8b191a6350242e7cac66cac05705b065bc6f0d07664] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:58177 - 51918 "HINFO IN 2750685008957993427.78133201232048018. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.014244974s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [a8ee47583a79d548703a4cb8b758115326cf56bfefa96699e29fea9b33c518d5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35274 - 4006 "HINFO IN 3462443190073411497.836027910425017106. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015182992s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-845789
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-845789
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b059332e570e1d712234ec4f823aa77854e7956d
	                    minikube.k8s.io/name=pause-845789
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_05T18_13_39_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Jun 2023 18:13:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-845789
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Jun 2023 18:14:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Jun 2023 18:13:53 +0000   Mon, 05 Jun 2023 18:13:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Jun 2023 18:13:53 +0000   Mon, 05 Jun 2023 18:13:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Jun 2023 18:13:53 +0000   Mon, 05 Jun 2023 18:13:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Jun 2023 18:13:53 +0000   Mon, 05 Jun 2023 18:13:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-845789
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 cfbe3c9adb2f42edbfff435b13af338f
	  System UUID:                e3eb05d2-752e-48b1-b77a-1aa1b8da647c
	  Boot ID:                    da2c815d-c926-431d-a79c-25e8afa61b1d
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-lkkp2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     51s
	  kube-system                 etcd-pause-845789                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         64s
	  kube-system                 kindnet-qfl6w                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      52s
	  kube-system                 kube-apiserver-pause-845789             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-pause-845789    200m (10%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-hkn5d                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-pause-845789             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node pause-845789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node pause-845789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x8 over 73s)  kubelet          Node pause-845789 status is now: NodeHasSufficientPID
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x2 over 64s)  kubelet          Node pause-845789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x2 over 64s)  kubelet          Node pause-845789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x2 over 64s)  kubelet          Node pause-845789 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node pause-845789 event: Registered Node pause-845789 in Controller
	  Normal  NodeReady                49s                kubelet          Node pause-845789 status is now: NodeReady
	  Normal  RegisteredNode           6s                 node-controller  Node pause-845789 event: Registered Node pause-845789 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001064] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000961] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000e03af87c
	[  +0.001075] FS-Cache: N-key=[8] 'd1d1c90000000000'
	[  +0.004539] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000982] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=00000000e785b4d1
	[  +0.001078] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000723] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000005f019a4a
	[  +0.001044] FS-Cache: N-key=[8] 'd1d1c90000000000'
	[  +3.062644] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=0000000061ba42e8
	[  +0.001124] FS-Cache: O-key=[8] 'd0d1c90000000000'
	[  +0.000715] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000e03af87c
	[  +0.001040] FS-Cache: N-key=[8] 'd0d1c90000000000'
	[  +0.324591] FS-Cache: Duplicate cookie detected
	[  +0.000707] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=000000004b485b91
	[  +0.001042] FS-Cache: O-key=[8] 'd6d1c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000995] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000003ad92423
	[  +0.001057] FS-Cache: N-key=[8] 'd6d1c90000000000'
	
	* 
	* ==> etcd [b656bdc2a1d922eb68300044a25bb91693982de4dc391a844ec8790ac84e5d25] <==
	* {"level":"info","ts":"2023-06-05T18:14:18.407Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-05T18:14:18.407Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-05T18:14:18.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-06-05T18:14:18.407Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-06-05T18:14:18.407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T18:14:18.408Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T18:14:18.423Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-05T18:14:18.444Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-05T18:14:18.454Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-06-05T18:14:18.454Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-06-05T18:14:18.454Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-06-05T18:14:19.669Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T18:14:19.670Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-06-05T18:14:19.671Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T18:14:19.673Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-05T18:14:19.669Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-845789 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-05T18:14:19.676Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-05T18:14:19.676Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [c9b4f7001f3c15228d03cea251b3deb8ed506d11399610450285f98d983fda93] <==
	* {"level":"info","ts":"2023-06-05T18:14:05.634Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"31.588994ms"}
	{"level":"info","ts":"2023-06-05T18:14:05.763Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-06-05T18:14:05.775Z","caller":"etcdserver/raft.go:529","msg":"restarting local member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","commit-index":436}
	{"level":"info","ts":"2023-06-05T18:14:05.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=()"}
	{"level":"info","ts":"2023-06-05T18:14:05.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became follower at term 2"}
	{"level":"info","ts":"2023-06-05T18:14:05.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ea7e25599daad906 [peers: [], term: 2, commit: 436, applied: 0, lastindex: 436, lastterm: 2]"}
	{"level":"warn","ts":"2023-06-05T18:14:05.806Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-06-05T18:14:05.837Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":424}
	{"level":"info","ts":"2023-06-05T18:14:05.853Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-06-05T18:14:05.903Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"ea7e25599daad906","timeout":"7s"}
	{"level":"info","ts":"2023-06-05T18:14:05.903Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-06-05T18:14:05.903Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.7","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-06-05T18:14:05.903Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-06-05T18:14:05.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-06-05T18:14:05.904Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-06-05T18:14:05.904Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T18:14:05.904Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T18:14:05.943Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-05T18:14:05.944Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-05T18:14:05.944Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-05T18:14:05.946Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-05T18:14:05.946Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-06-05T18:14:05.946Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-06-05T18:14:05.946Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-05T18:14:05.947Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	* 
	* ==> kernel <==
	*  18:14:42 up  2:56,  0 users,  load average: 4.54, 2.90, 2.29
	Linux pause-845789 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [1b213002395e708566f6f5b5a649065e0692eb8150048b9718e252ff3bc428c5] <==
	* I0605 18:14:18.063206       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0605 18:14:18.128471       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0605 18:14:18.129312       1 main.go:116] setting mtu 1500 for CNI 
	I0605 18:14:18.129676       1 main.go:146] kindnetd IP family: "ipv4"
	I0605 18:14:18.129840       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0605 18:14:18.471756       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0605 18:14:18.472125       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0605 18:14:23.867499       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0605 18:14:23.867788       1 main.go:227] handling current node
	I0605 18:14:33.883728       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0605 18:14:33.883761       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [f5667873af88719df6b5dedf9f489eb022c51271f2cd1b3eb469e0d4dacc97a7] <==
	* I0605 18:14:05.073105       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0605 18:14:05.073186       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0605 18:14:05.073365       1 main.go:116] setting mtu 1500 for CNI 
	I0605 18:14:05.073382       1 main.go:146] kindnetd IP family: "ipv4"
	I0605 18:14:05.073398       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0605 18:14:05.803028       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0605 18:14:05.803341       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kube-apiserver [9c9eeb2c7e3ec57b2599949a59a0e4a094a494a361b9793dd0a9962db1cc3f1b] <==
	* I0605 18:14:23.427782       1 naming_controller.go:291] Starting NamingConditionController
	I0605 18:14:23.427801       1 establishing_controller.go:76] Starting EstablishingController
	I0605 18:14:23.427820       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0605 18:14:23.427832       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0605 18:14:23.427845       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0605 18:14:23.427880       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0605 18:14:23.440139       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0605 18:14:23.441588       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0605 18:14:23.441611       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0605 18:14:23.779697       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0605 18:14:23.816424       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0605 18:14:23.851501       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0605 18:14:23.851583       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0605 18:14:23.862331       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0605 18:14:23.862361       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0605 18:14:23.862457       1 cache.go:39] Caches are synced for autoregister controller
	I0605 18:14:23.862599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0605 18:14:23.875776       1 shared_informer.go:318] Caches are synced for configmaps
	I0605 18:14:23.875860       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0605 18:14:23.877264       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	E0605 18:14:23.897071       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0605 18:14:24.466412       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0605 18:14:36.825559       1 controller.go:624] quota admission added evaluator for: endpoints
	I0605 18:14:36.877704       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0605 18:14:36.900429       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [9d612546a5b5f10d19269ed6d4beb3cf134ca893d0c7b8c4eac3c5d0535e98fb] <==
	* I0605 18:14:05.855806       1 server.go:551] external host was not specified, using 192.168.76.2
	I0605 18:14:05.869912       1 server.go:165] Version: v1.27.2
	I0605 18:14:05.871245       1 server.go:167] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	* 
	* ==> kube-controller-manager [1de7fa90fe9accbdc1d2f5695929320c8eace2374145be13c6ab58965c9a3dcd] <==
	* I0605 18:14:36.808067       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-845789"
	I0605 18:14:36.808889       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0605 18:14:36.808991       1 shared_informer.go:318] Caches are synced for expand
	I0605 18:14:36.809075       1 taint_manager.go:211] "Sending events to api server"
	I0605 18:14:36.809555       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0605 18:14:36.809627       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0605 18:14:36.808919       1 shared_informer.go:318] Caches are synced for stateful set
	I0605 18:14:36.810344       1 event.go:307] "Event occurred" object="pause-845789" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-845789 event: Registered Node pause-845789 in Controller"
	I0605 18:14:36.808930       1 shared_informer.go:318] Caches are synced for namespace
	I0605 18:14:36.808935       1 shared_informer.go:318] Caches are synced for daemon sets
	I0605 18:14:36.816911       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0605 18:14:36.820145       1 shared_informer.go:318] Caches are synced for persistent volume
	I0605 18:14:36.821825       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0605 18:14:36.821962       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0605 18:14:36.821979       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0605 18:14:36.829338       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0605 18:14:36.837923       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0605 18:14:36.853694       1 shared_informer.go:318] Caches are synced for attach detach
	I0605 18:14:36.892086       1 shared_informer.go:318] Caches are synced for resource quota
	I0605 18:14:36.902858       1 shared_informer.go:318] Caches are synced for resource quota
	I0605 18:14:36.919112       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0605 18:14:36.943002       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-fhpqv"
	I0605 18:14:37.291737       1 shared_informer.go:318] Caches are synced for garbage collector
	I0605 18:14:37.291776       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0605 18:14:37.297401       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [4d5bd8c85ccb6172975ab10caa1da5a458e7ccf843a84a1de3429849770fbbe8] <==
	* 
	* 
	* ==> kube-proxy [9f4e5a1f1b1697a4bdebccf594dd47b6d5cf9f07333d036d195bcc6f4826263a] <==
	* I0605 18:14:23.928017       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0605 18:14:23.928292       1 server_others.go:110] "Detected node IP" address="192.168.76.2"
	I0605 18:14:23.955513       1 server_others.go:551] "Using iptables proxy"
	I0605 18:14:24.375810       1 server_others.go:190] "Using iptables Proxier"
	I0605 18:14:24.377491       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0605 18:14:24.377554       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0605 18:14:24.377595       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0605 18:14:24.377716       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0605 18:14:24.386649       1 server.go:657] "Version info" version="v1.27.2"
	I0605 18:14:24.386761       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0605 18:14:24.395372       1 config.go:188] "Starting service config controller"
	I0605 18:14:24.404364       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0605 18:14:24.404509       1 config.go:97] "Starting endpoint slice config controller"
	I0605 18:14:24.404603       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0605 18:14:24.403549       1 config.go:315] "Starting node config controller"
	I0605 18:14:24.404933       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0605 18:14:24.504711       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0605 18:14:24.505024       1 shared_informer.go:318] Caches are synced for node config
	I0605 18:14:24.505281       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [f4898fe3b98d8da5dd96757f14bd45d4596b39d01316f166c39623a20ca9c09e] <==
	* 
	* 
	* ==> kube-scheduler [71d3d8c961e5398097e4f6db56499fa661e52ecb42ce4529773a11baf8e4738c] <==
	* 
	* 
	* ==> kube-scheduler [e523467def4fe834aba2811465974e217cf6ab15cdac91b0d11bb211b25cb3f2] <==
	* I0605 18:14:21.534896       1 serving.go:348] Generated self-signed cert in-memory
	I0605 18:14:25.147117       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0605 18:14:25.147149       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0605 18:14:25.152515       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0605 18:14:25.152604       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0605 18:14:25.152666       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0605 18:14:25.152715       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0605 18:14:25.152773       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0605 18:14:25.152851       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0605 18:14:25.152934       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0605 18:14:25.153025       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0605 18:14:25.253237       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0605 18:14:25.253351       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0605 18:14:25.254327       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* Jun 05 18:14:23 pause-845789 kubelet[1396]: E0605 18:14:23.422153    1396 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-fhpqv_kube-system(30efb721-d346-4e6e-b034-3e3ec690f8e8)\"" pod="kube-system/coredns-5d78c9869d-fhpqv" podUID=30efb721-d346-4e6e-b034-3e3ec690f8e8
	Jun 05 18:14:23 pause-845789 kubelet[1396]: I0605 18:14:23.425644    1396 scope.go:115] "RemoveContainer" containerID="4fa0d894ad1629fc01f6d8b191a6350242e7cac66cac05705b065bc6f0d07664"
	Jun 05 18:14:23 pause-845789 kubelet[1396]: E0605 18:14:23.426014    1396 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-lkkp2_kube-system(4f53c8cc-23b4-4731-9dc9-6c5c802d1224)\"" pod="kube-system/coredns-5d78c9869d-lkkp2" podUID=4f53c8cc-23b4-4731-9dc9-6c5c802d1224
	Jun 05 18:14:23 pause-845789 kubelet[1396]: E0605 18:14:23.641171    1396 reflector.go:148] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.109516    1396 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30efb721-d346-4e6e-b034-3e3ec690f8e8-config-volume\") pod \"30efb721-d346-4e6e-b034-3e3ec690f8e8\" (UID: \"30efb721-d346-4e6e-b034-3e3ec690f8e8\") "
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.109596    1396 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzqbv\" (UniqueName: \"kubernetes.io/projected/30efb721-d346-4e6e-b034-3e3ec690f8e8-kube-api-access-qzqbv\") pod \"30efb721-d346-4e6e-b034-3e3ec690f8e8\" (UID: \"30efb721-d346-4e6e-b034-3e3ec690f8e8\") "
	Jun 05 18:14:37 pause-845789 kubelet[1396]: W0605 18:14:37.110153    1396 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/30efb721-d346-4e6e-b034-3e3ec690f8e8/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.110321    1396 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30efb721-d346-4e6e-b034-3e3ec690f8e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "30efb721-d346-4e6e-b034-3e3ec690f8e8" (UID: "30efb721-d346-4e6e-b034-3e3ec690f8e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.115154    1396 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30efb721-d346-4e6e-b034-3e3ec690f8e8-kube-api-access-qzqbv" (OuterVolumeSpecName: "kube-api-access-qzqbv") pod "30efb721-d346-4e6e-b034-3e3ec690f8e8" (UID: "30efb721-d346-4e6e-b034-3e3ec690f8e8"). InnerVolumeSpecName "kube-api-access-qzqbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.210553    1396 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30efb721-d346-4e6e-b034-3e3ec690f8e8-config-volume\") on node \"pause-845789\" DevicePath \"\""
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.210609    1396 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qzqbv\" (UniqueName: \"kubernetes.io/projected/30efb721-d346-4e6e-b034-3e3ec690f8e8-kube-api-access-qzqbv\") on node \"pause-845789\" DevicePath \"\""
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.606065    1396 scope.go:115] "RemoveContainer" containerID="0697dd4705d853e12210ccf5db442dc6f56f033e9df790b5229be8ea9f99475e"
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.121584    1396 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/576f237d6e37d705941c6d8e7eadefe4c01b2f05938e6892b4f394000e679e4d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/576f237d6e37d705941c6d8e7eadefe4c01b2f05938e6892b4f394000e679e4d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-pause-845789_eb7f52eaa379a771b8ee863a4defd4a2/kube-apiserver/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-pause-845789_eb7f52eaa379a771b8ee863a4defd4a2/kube-apiserver/0.log: no such file or directory
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.132385    1396 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cbf434e84c7c6a4e33cc901325079e034c6754bf9df711515749d2960d57c415/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cbf434e84c7c6a4e33cc901325079e034c6754bf9df711515749d2960d57c415/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-pause-845789_0a0ee1a52c371d2b931ced33f5032a1a/kube-controller-manager/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-pause-845789_0a0ee1a52c371d2b931ced33f5032a1a/kube-controller-manager/0.log: no such file or directory
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.178910    1396 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/db3864da828851854e75e1fb6cc1184ea95ecce1e413e0b4fe577b1fa163e7b8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/db3864da828851854e75e1fb6cc1184ea95ecce1e413e0b4fe577b1fa163e7b8/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-scheduler-pause-845789_858c88b0c315047f256d62cb236b388e/kube-scheduler/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-scheduler-pause-845789_858c88b0c315047f256d62cb236b388e/kube-scheduler/0.log: no such file or directory
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.194568    1396 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6fb5bc48e990b4cef32f23c979b6fd90131ef1315c4503072c430de6b0e2196b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6fb5bc48e990b4cef32f23c979b6fd90131ef1315c4503072c430de6b0e2196b/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_etcd-pause-845789_8170b16e1ee4df8d10045d53fe9f580f/etcd/0.log" to get inode usage: stat /var/log/pods/kube-system_etcd-pause-845789_8170b16e1ee4df8d10045d53fe9f580f/etcd/0.log: no such file or directory
	Jun 05 18:14:38 pause-845789 kubelet[1396]: I0605 18:14:38.209170    1396 scope.go:115] "RemoveContainer" containerID="4fa0d894ad1629fc01f6d8b191a6350242e7cac66cac05705b065bc6f0d07664"
	Jun 05 18:14:38 pause-845789 kubelet[1396]: I0605 18:14:38.210900    1396 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=30efb721-d346-4e6e-b034-3e3ec690f8e8 path="/var/lib/kubelet/pods/30efb721-d346-4e6e-b034-3e3ec690f8e8/volumes"
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.333768    1396 manager.go:1106] Failed to create existing container: /docker/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/crio/crio-49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf: Error finding container 49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf: Status 404 returned error can't find the container with id 49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.343265    1396 manager.go:1106] Failed to create existing container: /crio/crio-5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2: Error finding container 5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2: Status 404 returned error can't find the container with id 5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.343719    1396 manager.go:1106] Failed to create existing container: /crio/crio-49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf: Error finding container 49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf: Status 404 returned error can't find the container with id 49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.366973    1396 manager.go:1106] Failed to create existing container: /crio/crio-36e126b3e99c19b7f0637150f871141ceb4d1ed86d92e90f239175af67f78b3b: Error finding container 36e126b3e99c19b7f0637150f871141ceb4d1ed86d92e90f239175af67f78b3b: Status 404 returned error can't find the container with id 36e126b3e99c19b7f0637150f871141ceb4d1ed86d92e90f239175af67f78b3b
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.374551    1396 manager.go:1106] Failed to create existing container: /crio/crio-3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f: Error finding container 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f: Status 404 returned error can't find the container with id 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.379377    1396 manager.go:1106] Failed to create existing container: /docker/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/crio/crio-5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2: Error finding container 5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2: Status 404 returned error can't find the container with id 5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.380646    1396 manager.go:1106] Failed to create existing container: /docker/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/crio/crio-3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f: Error finding container 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f: Status 404 returned error can't find the container with id 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0605 18:14:41.181289  537870 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/16634-402421/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
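
The "bufio.Scanner: token too long" error in the stderr block above is the stock Go error (bufio.ErrTooLong) returned when a single line exceeds the Scanner's buffer cap, 64 KiB by default. A minimal sketch of the failure pattern and the standard stdlib workaround, reading a hypothetical long-lined log file; this is not minikube's code, just the usual remedy for that error:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical long-lined log file
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// bufio.MaxScanTokenSize defaults to 64 KiB; one oversized line makes
		// Scan() stop and Err() report "bufio.Scanner: token too long".
		// Raising the cap (here to 10 MiB) before scanning is the usual fix.
		sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}
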
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-845789 -n pause-845789
helpers_test.go:261: (dbg) Run:  kubectl --context pause-845789 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
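
Both coredns ("dial tcp 10.96.0.1:443: connect: connection refused") and kindnet ("Failed to get nodes, retrying after error") in the logs above report the same symptom: the in-cluster apiserver service VIP was briefly unreachable while the control plane restarted. A minimal sketch, assuming the conventional 10.96.0.1:443 kubernetes service address, of the TCP-level probe-and-retry those components are effectively performing; a hypothetical helper, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const apiserver = "10.96.0.1:443" // default kubernetes service VIP (assumption)
		for attempt := 1; attempt <= 5; attempt++ {
			conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("apiserver reachable")
				return
			}
			// "connect: connection refused" here matches the coredns/kindnet lines above.
			fmt.Printf("attempt %d: %v; retrying\n", attempt, err)
			time.Sleep(time.Duration(attempt) * time.Second) // simple linear backoff, as a sketch
		}
		fmt.Println("apiserver still unreachable")
	}
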
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-845789
helpers_test.go:235: (dbg) docker inspect pause-845789:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12",
	        "Created": "2023-06-05T18:13:13.694452925Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 531145,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-05T18:13:14.043858497Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:80ea0da8caa6eb7997e8d55fe8736424844c5160aabf0e85547dc140c538e81f",
	        "ResolvConfPath": "/var/lib/docker/containers/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/hostname",
	        "HostsPath": "/var/lib/docker/containers/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/hosts",
	        "LogPath": "/var/lib/docker/containers/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12-json.log",
	        "Name": "/pause-845789",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-845789:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-845789",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c2c8103ebcfedecd65cbcbdea301d8f40d4c84e6bd32ff57423aa2edf8936057-init/diff:/var/lib/docker/overlay2/12deadd96699cc2736cf6d24a9900cb6d72f9bc5f3f15d793b28adb475def155/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c2c8103ebcfedecd65cbcbdea301d8f40d4c84e6bd32ff57423aa2edf8936057/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c2c8103ebcfedecd65cbcbdea301d8f40d4c84e6bd32ff57423aa2edf8936057/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c2c8103ebcfedecd65cbcbdea301d8f40d4c84e6bd32ff57423aa2edf8936057/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-845789",
	                "Source": "/var/lib/docker/volumes/pause-845789/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-845789",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-845789",
	                "name.minikube.sigs.k8s.io": "pause-845789",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5843eff4316420c067d3eb705229d20da8b236fb69655e8348e30e7d12d66c1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33313"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33312"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33309"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33311"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33310"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d5843eff4316",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-845789": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bebf24cd58e9",
	                        "pause-845789"
	                    ],
	                    "NetworkID": "f3557e92d1538057696618098145c870a6c1c323dddae88497e1195c6dfdcb37",
	                    "EndpointID": "5e6b6d2666fca9bbceabc4ae6a01929f2260c550cfb85f20e81682f43628b2f4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
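
The inspect output above shows the guest ports (22, 2376, 5000, 8443, 32443) published on ephemeral 127.0.0.1 host ports (33309-33313). A sketch of pulling one mapping out programmatically by shelling out to docker's Go-template formatter; it assumes the container name from this run and that `docker` is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Template indexes NetworkSettings.Ports["8443/tcp"][0].HostPort,
		// i.e. "33310" in the inspect dump above.
		tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl, "pause-845789").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
	}
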
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-845789 -n pause-845789
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-845789 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-845789 logs -n 25: (2.598594902s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p scheduled-stop-639332       | scheduled-stop-639332       | jenkins | v1.30.1 | 05 Jun 23 18:07 UTC | 05 Jun 23 18:07 UTC |
	| start   | -p insufficient-storage-035590 | insufficient-storage-035590 | jenkins | v1.30.1 | 05 Jun 23 18:07 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-035590 | insufficient-storage-035590 | jenkins | v1.30.1 | 05 Jun 23 18:07 UTC | 05 Jun 23 18:07 UTC |
	| start   | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:07 UTC |                     |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:07 UTC | 05 Jun 23 18:08 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	| start   | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	|         | --no-kubernetes                |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-063572 sudo    | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| stop    | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	| start   | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-063572 sudo    | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC |                     |
	|         | systemctl is-active --quiet    |                             |         |         |                     |                     |
	|         | service kubelet                |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-063572         | NoKubernetes-063572         | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:08 UTC |
	| start   | -p kubernetes-upgrade-987814   | kubernetes-upgrade-987814   | jenkins | v1.30.1 | 05 Jun 23 18:08 UTC | 05 Jun 23 18:09 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p missing-upgrade-870219      | missing-upgrade-870219      | jenkins | v1.30.1 | 05 Jun 23 18:09 UTC | 05 Jun 23 18:09 UTC |
	| stop    | -p kubernetes-upgrade-987814   | kubernetes-upgrade-987814   | jenkins | v1.30.1 | 05 Jun 23 18:09 UTC | 05 Jun 23 18:09 UTC |
	| start   | -p kubernetes-upgrade-987814   | kubernetes-upgrade-987814   | jenkins | v1.30.1 | 05 Jun 23 18:09 UTC | 05 Jun 23 18:14 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p stopped-upgrade-266335      | stopped-upgrade-266335      | jenkins | v1.30.1 | 05 Jun 23 18:11 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p stopped-upgrade-266335      | stopped-upgrade-266335      | jenkins | v1.30.1 | 05 Jun 23 18:11 UTC | 05 Jun 23 18:11 UTC |
	| start   | -p running-upgrade-783662      | running-upgrade-783662      | jenkins | v1.30.1 | 05 Jun 23 18:12 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p running-upgrade-783662      | running-upgrade-783662      | jenkins | v1.30.1 | 05 Jun 23 18:13 UTC | 05 Jun 23 18:13 UTC |
	| start   | -p pause-845789 --memory=2048  | pause-845789                | jenkins | v1.30.1 | 05 Jun 23 18:13 UTC | 05 Jun 23 18:13 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-845789                | pause-845789                | jenkins | v1.30.1 | 05 Jun 23 18:13 UTC | 05 Jun 23 18:14 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-987814   | kubernetes-upgrade-987814   | jenkins | v1.30.1 | 05 Jun 23 18:14 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-987814   | kubernetes-upgrade-987814   | jenkins | v1.30.1 | 05 Jun 23 18:14 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
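	The two unfinished kubernetes-upgrade-987814 starts at the bottom of this audit table are the runs under test. As a minimal sketch, the final entry could be replayed by hand against the same profile, assuming the out/ build tree from this run is available (flags copied verbatim from the table):
	
	    # replay the last audit entry: upgrade start to v1.27.2 on the existing profile
	    out/minikube-linux-arm64 start -p kubernetes-upgrade-987814 --memory=2200 \
	      --kubernetes-version=v1.27.2 --alsologtostderr -v=1 \
	      --driver=docker --container-runtime=crio
	    # then check what state the profile ended up in
	    out/minikube-linux-arm64 -p kubernetes-upgrade-987814 status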
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/05 18:14:43
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0605 18:14:43.202656  538143 out.go:296] Setting OutFile to fd 1 ...
	I0605 18:14:43.202884  538143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:14:43.202905  538143 out.go:309] Setting ErrFile to fd 2...
	I0605 18:14:43.202925  538143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:14:43.203104  538143 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 18:14:43.203511  538143 out.go:303] Setting JSON to false
	I0605 18:14:43.204654  538143 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10616,"bootTime":1685978268,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 18:14:43.204754  538143 start.go:137] virtualization:  
	I0605 18:14:43.208094  538143 out.go:177] * [kubernetes-upgrade-987814] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 18:14:43.211420  538143 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 18:14:43.211508  538143 notify.go:220] Checking for updates...
	I0605 18:14:43.217288  538143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:14:43.219471  538143 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 18:14:43.221981  538143 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 18:14:43.224123  538143 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 18:14:43.226604  538143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 18:14:43.229485  538143 config.go:182] Loaded profile config "kubernetes-upgrade-987814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 18:14:43.230109  538143 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 18:14:43.289309  538143 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 18:14:43.289467  538143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:14:43.397569  538143 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-06-05 18:14:43.384380286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:14:43.397674  538143 docker.go:294] overlay module found
	I0605 18:14:43.400105  538143 out.go:177] * Using the docker driver based on existing profile
	I0605 18:14:43.402028  538143 start.go:297] selected driver: docker
	I0605 18:14:43.402048  538143 start.go:875] validating driver "docker" against &{Name:kubernetes-upgrade-987814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-987814 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:14:43.402163  538143 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 18:14:43.402976  538143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:14:43.498350  538143 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-06-05 18:14:43.4884358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:14:43.498674  538143 cni.go:84] Creating CNI manager for ""
	I0605 18:14:43.498692  538143 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 18:14:43.498704  538143 start_flags.go:319] config:
	{Name:kubernetes-upgrade-987814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-987814 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 18:14:43.501335  538143 out.go:177] * Starting control plane node kubernetes-upgrade-987814 in cluster kubernetes-upgrade-987814
	I0605 18:14:43.503448  538143 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 18:14:43.505794  538143 out.go:177] * Pulling base image ...
	I0605 18:14:43.508229  538143 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 18:14:43.508288  538143 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0605 18:14:43.508306  538143 cache.go:57] Caching tarball of preloaded images
	I0605 18:14:43.508410  538143 preload.go:174] Found /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0605 18:14:43.508425  538143 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0605 18:14:43.508540  538143 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kubernetes-upgrade-987814/config.json ...
	I0605 18:14:43.508815  538143 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon
	I0605 18:14:43.537775  538143 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon, skipping pull
	I0605 18:14:43.537816  538143 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f exists in daemon, skipping load
	I0605 18:14:43.537839  538143 cache.go:195] Successfully downloaded all kic artifacts
	I0605 18:14:43.537867  538143 start.go:364] acquiring machines lock for kubernetes-upgrade-987814: {Name:mk09849652fa096fffe72c5d30ecbcb3ea64f297 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0605 18:14:43.537960  538143 start.go:368] acquired machines lock for "kubernetes-upgrade-987814" in 55.098µs
	I0605 18:14:43.537982  538143 start.go:96] Skipping create...Using existing machine configuration
	I0605 18:14:43.537989  538143 fix.go:55] fixHost starting: 
	I0605 18:14:43.538280  538143 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-987814 --format={{.State.Status}}
	I0605 18:14:43.567299  538143 fix.go:103] recreateIfNeeded on kubernetes-upgrade-987814: state=Running err=<nil>
	W0605 18:14:43.567334  538143 fix.go:129] unexpected machine state, will restart: <nil>
	I0605 18:14:43.569486  538143 out.go:177] * Updating the running docker "kubernetes-upgrade-987814" container ...
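	The fix path above ends at "Updating the running docker ... container" because the machine probe found the container already Running. That probe is an ordinary docker command and can be repeated by hand (copied verbatim from the cli_runner line above):
	
	    # the same state check minikube's fix.go performed at 18:14:43.538280
	    docker container inspect kubernetes-upgrade-987814 --format={{.State.Status}}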
	
	* 
	* ==> CRI-O <==
	* Jun 05 18:14:23 pause-845789 crio[2699]: time="2023-06-05 18:14:23.895116030Z" level=info msg="Updated default CNI network name to kindnet"
	Jun 05 18:14:23 pause-845789 crio[2699]: time="2023-06-05 18:14:23.895133925Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Jun 05 18:14:23 pause-845789 crio[2699]: time="2023-06-05 18:14:23.911685057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jun 05 18:14:23 pause-845789 crio[2699]: time="2023-06-05 18:14:23.911721480Z" level=info msg="Updated default CNI network name to kindnet"
	Jun 05 18:14:36 pause-845789 crio[2699]: time="2023-06-05 18:14:36.941174711Z" level=info msg="Stopping pod sandbox: 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=6dadb2a5-3f10-4617-9238-44fb1a9d5962 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 18:14:36 pause-845789 crio[2699]: time="2023-06-05 18:14:36.941421652Z" level=info msg="Got pod network &{Name:coredns-5d78c9869d-fhpqv Namespace:kube-system ID:3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f UID:30efb721-d346-4e6e-b034-3e3ec690f8e8 NetNS:/var/run/netns/2f3d13eb-5fb4-42e4-9049-3577ac586d7b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 05 18:14:36 pause-845789 crio[2699]: time="2023-06-05 18:14:36.941560500Z" level=info msg="Deleting pod kube-system_coredns-5d78c9869d-fhpqv from CNI network \"kindnet\" (type=ptp)"
	Jun 05 18:14:36 pause-845789 crio[2699]: time="2023-06-05 18:14:36.984893246Z" level=info msg="Stopped pod sandbox: 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=6dadb2a5-3f10-4617-9238-44fb1a9d5962 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 18:14:37 pause-845789 crio[2699]: time="2023-06-05 18:14:37.608927878Z" level=info msg="Removing container: 0697dd4705d853e12210ccf5db442dc6f56f033e9df790b5229be8ea9f99475e" id=c532cb0e-36e7-48af-ab91-0adcf28977ed name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 05 18:14:37 pause-845789 crio[2699]: time="2023-06-05 18:14:37.638536135Z" level=info msg="Removed container 0697dd4705d853e12210ccf5db442dc6f56f033e9df790b5229be8ea9f99475e: kube-system/coredns-5d78c9869d-fhpqv/coredns" id=c532cb0e-36e7-48af-ab91-0adcf28977ed name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.033364694Z" level=info msg="Stopping pod sandbox: 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=890b88f6-0466-4e97-a75b-11c559bb1c69 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.033416493Z" level=info msg="Stopped pod sandbox (already stopped): 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=890b88f6-0466-4e97-a75b-11c559bb1c69 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.033871057Z" level=info msg="Removing pod sandbox: 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=9676990c-40c7-4e6c-8725-12a76515eb91 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.042160765Z" level=info msg="Removed pod sandbox: 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f" id=9676990c-40c7-4e6c-8725-12a76515eb91 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.210111087Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=ab917904-de9d-490f-818e-9a59074bdc87 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.210394213Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ab917904-de9d-490f-818e-9a59074bdc87 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.211572461Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=b3a42ddc-e906-4a75-8246-921d7a2fe013 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.211773594Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b3a42ddc-e906-4a75-8246-921d7a2fe013 name=/runtime.v1.ImageService/ImageStatus
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.213037848Z" level=info msg="Creating container: kube-system/coredns-5d78c9869d-lkkp2/coredns" id=8d2c7e65-3f54-4934-92d4-15895d3a135e name=/runtime.v1.RuntimeService/CreateContainer
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.213142340Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.226403642Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4a55f2fd1b6b2613539dced2c2cf3869a5919ac7c322a8896851b37fb51d5bac/merged/etc/passwd: no such file or directory"
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.226460356Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4a55f2fd1b6b2613539dced2c2cf3869a5919ac7c322a8896851b37fb51d5bac/merged/etc/group: no such file or directory"
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.295287725Z" level=info msg="Created container a8ee47583a79d548703a4cb8b758115326cf56bfefa96699e29fea9b33c518d5: kube-system/coredns-5d78c9869d-lkkp2/coredns" id=8d2c7e65-3f54-4934-92d4-15895d3a135e name=/runtime.v1.RuntimeService/CreateContainer
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.296333904Z" level=info msg="Starting container: a8ee47583a79d548703a4cb8b758115326cf56bfefa96699e29fea9b33c518d5" id=0d37683c-c6d5-493f-b809-4fdbdc1b7cde name=/runtime.v1.RuntimeService/StartContainer
	Jun 05 18:14:38 pause-845789 crio[2699]: time="2023-06-05 18:14:38.311269749Z" level=info msg="Started container" PID=3496 containerID=a8ee47583a79d548703a4cb8b758115326cf56bfefa96699e29fea9b33c518d5 description=kube-system/coredns-5d78c9869d-lkkp2/coredns id=0d37683c-c6d5-493f-b809-4fdbdc1b7cde name=/runtime.v1.RuntimeService/StartContainer sandboxID=49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf
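	The container status table below is the CRI-level view of the same pause-845789 node. A sketch for regenerating it, and for pulling logs from one of the exited attempt-1 containers, assuming crictl on the node is already wired to the CRI-O socket recorded in the node annotations (unix:///var/run/crio/crio.sock):
	
	    # list running and exited containers, matching the table below
	    out/minikube-linux-arm64 -p pause-845789 ssh sudo crictl ps -a
	    # logs of the exited kindnet-cni attempt, by ID prefix from the table
	    out/minikube-linux-arm64 -p pause-845789 ssh sudo crictl logs f5667873af887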
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a8ee47583a79d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   7 seconds ago       Running             coredns                   2                   49ed23ca3133e       coredns-5d78c9869d-lkkp2
	1de7fa90fe9ac       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4   28 seconds ago      Running             kube-controller-manager   2                   5f925fea82d08       kube-controller-manager-pause-845789
	9f4e5a1f1b169       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0   28 seconds ago      Running             kube-proxy                2                   5f6c63a8a4eee       kube-proxy-hkn5d
	e523467def4fe       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840   28 seconds ago      Running             kube-scheduler            2                   c173e49d19449       kube-scheduler-pause-845789
	1b213002395e7       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   28 seconds ago      Running             kindnet-cni               2                   36e126b3e99c1       kindnet-qfl6w
	9c9eeb2c7e3ec       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae   28 seconds ago      Running             kube-apiserver            2                   13f1aadfbeac9       kube-apiserver-pause-845789
	b656bdc2a1d92       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   28 seconds ago      Running             etcd                      2                   df76f33ae89d6       etcd-pause-845789
	f5667873af887       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   41 seconds ago      Exited              kindnet-cni               1                   36e126b3e99c1       kindnet-qfl6w
	c9b4f7001f3c1       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   41 seconds ago      Exited              etcd                      1                   df76f33ae89d6       etcd-pause-845789
	4fa0d894ad162       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   41 seconds ago      Exited              coredns                   1                   49ed23ca3133e       coredns-5d78c9869d-lkkp2
	71d3d8c961e53       305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840   41 seconds ago      Exited              kube-scheduler            1                   c173e49d19449       kube-scheduler-pause-845789
	f4898fe3b98d8       29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0   41 seconds ago      Exited              kube-proxy                1                   5f6c63a8a4eee       kube-proxy-hkn5d
	9d612546a5b5f       72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae   41 seconds ago      Exited              kube-apiserver            1                   13f1aadfbeac9       kube-apiserver-pause-845789
	4d5bd8c85ccb6       2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4   41 seconds ago      Exited              kube-controller-manager   1                   5f925fea82d08       kube-controller-manager-pause-845789
	
	* 
	* ==> coredns [4fa0d894ad1629fc01f6d8b191a6350242e7cac66cac05705b065bc6f0d07664] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:58177 - 51918 "HINFO IN 2750685008957993427.78133201232048018. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.014244974s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [a8ee47583a79d548703a4cb8b758115326cf56bfefa96699e29fea9b33c518d5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35274 - 4006 "HINFO IN 3462443190073411497.836027910425017106. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015182992s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-845789
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-845789
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b059332e570e1d712234ec4f823aa77854e7956d
	                    minikube.k8s.io/name=pause-845789
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_05T18_13_39_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Jun 2023 18:13:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-845789
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Jun 2023 18:14:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Jun 2023 18:13:53 +0000   Mon, 05 Jun 2023 18:13:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Jun 2023 18:13:53 +0000   Mon, 05 Jun 2023 18:13:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Jun 2023 18:13:53 +0000   Mon, 05 Jun 2023 18:13:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Jun 2023 18:13:53 +0000   Mon, 05 Jun 2023 18:13:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-845789
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022624Ki
	  pods:               110
	System Info:
	  Machine ID:                 cfbe3c9adb2f42edbfff435b13af338f
	  System UUID:                e3eb05d2-752e-48b1-b77a-1aa1b8da647c
	  Boot ID:                    da2c815d-c926-431d-a79c-25e8afa61b1d
	  Kernel Version:             5.15.0-1037-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-lkkp2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-pause-845789                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         68s
	  kube-system                 kindnet-qfl6w                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-pause-845789             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-pause-845789    200m (10%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-hkn5d                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-pause-845789             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node pause-845789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node pause-845789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x8 over 77s)  kubelet          Node pause-845789 status is now: NodeHasSufficientPID
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x2 over 68s)  kubelet          Node pause-845789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x2 over 68s)  kubelet          Node pause-845789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x2 over 68s)  kubelet          Node pause-845789 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node pause-845789 event: Registered Node pause-845789 in Controller
	  Normal  NodeReady                53s                kubelet          Node pause-845789 status is now: NodeReady
	  Normal  RegisteredNode           10s                node-controller  Node pause-845789 event: Registered Node pause-845789 in Controller
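	The node description above can be reproduced directly from the run's kubeconfig; a sketch, assuming the pause-845789 context is still present:
	
	    # full description, including the conditions and events shown above
	    kubectl --context pause-845789 describe node pause-845789
	    # or just the node's ready state at a glance
	    kubectl --context pause-845789 get node pause-845789 -o wide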
	
	* 
	* ==> dmesg <==
	* [  +0.001064] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000961] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000e03af87c
	[  +0.001075] FS-Cache: N-key=[8] 'd1d1c90000000000'
	[  +0.004539] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000982] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=00000000e785b4d1
	[  +0.001078] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000723] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000005f019a4a
	[  +0.001044] FS-Cache: N-key=[8] 'd1d1c90000000000'
	[  +3.062644] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=0000000061ba42e8
	[  +0.001124] FS-Cache: O-key=[8] 'd0d1c90000000000'
	[  +0.000715] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=00000000e03af87c
	[  +0.001040] FS-Cache: N-key=[8] 'd0d1c90000000000'
	[  +0.324591] FS-Cache: Duplicate cookie detected
	[  +0.000707] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000983] FS-Cache: O-cookie d=000000006a062106{9p.inode} n=000000004b485b91
	[  +0.001042] FS-Cache: O-key=[8] 'd6d1c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000995] FS-Cache: N-cookie d=000000006a062106{9p.inode} n=000000003ad92423
	[  +0.001057] FS-Cache: N-key=[8] 'd6d1c90000000000'
	
	* 
	* ==> etcd [b656bdc2a1d922eb68300044a25bb91693982de4dc391a844ec8790ac84e5d25] <==
	* {"level":"info","ts":"2023-06-05T18:14:18.407Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-05T18:14:18.407Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-05T18:14:18.407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-06-05T18:14:18.407Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-06-05T18:14:18.407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T18:14:18.408Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T18:14:18.423Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-05T18:14:18.444Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-05T18:14:18.454Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-06-05T18:14:18.454Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-06-05T18:14:18.454Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-06-05T18:14:19.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-06-05T18:14:19.669Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T18:14:19.670Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-06-05T18:14:19.671Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-05T18:14:19.673Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-05T18:14:19.669Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-845789 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-05T18:14:19.676Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-05T18:14:19.676Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [c9b4f7001f3c15228d03cea251b3deb8ed506d11399610450285f98d983fda93] <==
	* {"level":"info","ts":"2023-06-05T18:14:05.634Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"31.588994ms"}
	{"level":"info","ts":"2023-06-05T18:14:05.763Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-06-05T18:14:05.775Z","caller":"etcdserver/raft.go:529","msg":"restarting local member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","commit-index":436}
	{"level":"info","ts":"2023-06-05T18:14:05.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=()"}
	{"level":"info","ts":"2023-06-05T18:14:05.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became follower at term 2"}
	{"level":"info","ts":"2023-06-05T18:14:05.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ea7e25599daad906 [peers: [], term: 2, commit: 436, applied: 0, lastindex: 436, lastterm: 2]"}
	{"level":"warn","ts":"2023-06-05T18:14:05.806Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-06-05T18:14:05.837Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":424}
	{"level":"info","ts":"2023-06-05T18:14:05.853Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-06-05T18:14:05.903Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"ea7e25599daad906","timeout":"7s"}
	{"level":"info","ts":"2023-06-05T18:14:05.903Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-06-05T18:14:05.903Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.7","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-06-05T18:14:05.903Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-06-05T18:14:05.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-06-05T18:14:05.904Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-06-05T18:14:05.904Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T18:14:05.904Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-05T18:14:05.943Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-05T18:14:05.944Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-05T18:14:05.944Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-05T18:14:05.946Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-05T18:14:05.946Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-06-05T18:14:05.946Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-06-05T18:14:05.946Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-05T18:14:05.947Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	* 
	* ==> kernel <==
	*  18:14:46 up  2:56,  0 users,  load average: 4.54, 2.90, 2.29
	Linux pause-845789 5.15.0-1037-aws #41~20.04.1-Ubuntu SMP Mon May 22 18:20:20 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [1b213002395e708566f6f5b5a649065e0692eb8150048b9718e252ff3bc428c5] <==
	* I0605 18:14:18.063206       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0605 18:14:18.128471       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0605 18:14:18.129312       1 main.go:116] setting mtu 1500 for CNI 
	I0605 18:14:18.129676       1 main.go:146] kindnetd IP family: "ipv4"
	I0605 18:14:18.129840       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0605 18:14:18.471756       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0605 18:14:18.472125       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0605 18:14:23.867499       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0605 18:14:23.867788       1 main.go:227] handling current node
	I0605 18:14:33.883728       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0605 18:14:33.883761       1 main.go:227] handling current node
	I0605 18:14:43.900175       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0605 18:14:43.900286       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [f5667873af88719df6b5dedf9f489eb022c51271f2cd1b3eb469e0d4dacc97a7] <==
	* I0605 18:14:05.073105       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0605 18:14:05.073186       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0605 18:14:05.073365       1 main.go:116] setting mtu 1500 for CNI 
	I0605 18:14:05.073382       1 main.go:146] kindnetd IP family: "ipv4"
	I0605 18:14:05.073398       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0605 18:14:05.803028       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0605 18:14:05.803341       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kube-apiserver [9c9eeb2c7e3ec57b2599949a59a0e4a094a494a361b9793dd0a9962db1cc3f1b] <==
	* I0605 18:14:23.427782       1 naming_controller.go:291] Starting NamingConditionController
	I0605 18:14:23.427801       1 establishing_controller.go:76] Starting EstablishingController
	I0605 18:14:23.427820       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0605 18:14:23.427832       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0605 18:14:23.427845       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0605 18:14:23.427880       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0605 18:14:23.440139       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0605 18:14:23.441588       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0605 18:14:23.441611       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0605 18:14:23.779697       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0605 18:14:23.816424       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0605 18:14:23.851501       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0605 18:14:23.851583       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0605 18:14:23.862331       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0605 18:14:23.862361       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0605 18:14:23.862457       1 cache.go:39] Caches are synced for autoregister controller
	I0605 18:14:23.862599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0605 18:14:23.875776       1 shared_informer.go:318] Caches are synced for configmaps
	I0605 18:14:23.875860       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0605 18:14:23.877264       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	E0605 18:14:23.897071       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0605 18:14:24.466412       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0605 18:14:36.825559       1 controller.go:624] quota admission added evaluator for: endpoints
	I0605 18:14:36.877704       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0605 18:14:36.900429       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [9d612546a5b5f10d19269ed6d4beb3cf134ca893d0c7b8c4eac3c5d0535e98fb] <==
	* I0605 18:14:05.855806       1 server.go:551] external host was not specified, using 192.168.76.2
	I0605 18:14:05.869912       1 server.go:165] Version: v1.27.2
	I0605 18:14:05.871245       1 server.go:167] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	* 
	* ==> kube-controller-manager [1de7fa90fe9accbdc1d2f5695929320c8eace2374145be13c6ab58965c9a3dcd] <==
	* I0605 18:14:36.808067       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-845789"
	I0605 18:14:36.808889       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0605 18:14:36.808991       1 shared_informer.go:318] Caches are synced for expand
	I0605 18:14:36.809075       1 taint_manager.go:211] "Sending events to api server"
	I0605 18:14:36.809555       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0605 18:14:36.809627       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0605 18:14:36.808919       1 shared_informer.go:318] Caches are synced for stateful set
	I0605 18:14:36.810344       1 event.go:307] "Event occurred" object="pause-845789" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-845789 event: Registered Node pause-845789 in Controller"
	I0605 18:14:36.808930       1 shared_informer.go:318] Caches are synced for namespace
	I0605 18:14:36.808935       1 shared_informer.go:318] Caches are synced for daemon sets
	I0605 18:14:36.816911       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0605 18:14:36.820145       1 shared_informer.go:318] Caches are synced for persistent volume
	I0605 18:14:36.821825       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0605 18:14:36.821962       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0605 18:14:36.821979       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0605 18:14:36.829338       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0605 18:14:36.837923       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0605 18:14:36.853694       1 shared_informer.go:318] Caches are synced for attach detach
	I0605 18:14:36.892086       1 shared_informer.go:318] Caches are synced for resource quota
	I0605 18:14:36.902858       1 shared_informer.go:318] Caches are synced for resource quota
	I0605 18:14:36.919112       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0605 18:14:36.943002       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-fhpqv"
	I0605 18:14:37.291737       1 shared_informer.go:318] Caches are synced for garbage collector
	I0605 18:14:37.291776       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0605 18:14:37.297401       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [4d5bd8c85ccb6172975ab10caa1da5a458e7ccf843a84a1de3429849770fbbe8] <==
	* 
	* 
	* ==> kube-proxy [9f4e5a1f1b1697a4bdebccf594dd47b6d5cf9f07333d036d195bcc6f4826263a] <==
	* I0605 18:14:23.928017       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0605 18:14:23.928292       1 server_others.go:110] "Detected node IP" address="192.168.76.2"
	I0605 18:14:23.955513       1 server_others.go:551] "Using iptables proxy"
	I0605 18:14:24.375810       1 server_others.go:190] "Using iptables Proxier"
	I0605 18:14:24.377491       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0605 18:14:24.377554       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0605 18:14:24.377595       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0605 18:14:24.377716       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0605 18:14:24.386649       1 server.go:657] "Version info" version="v1.27.2"
	I0605 18:14:24.386761       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0605 18:14:24.395372       1 config.go:188] "Starting service config controller"
	I0605 18:14:24.404364       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0605 18:14:24.404509       1 config.go:97] "Starting endpoint slice config controller"
	I0605 18:14:24.404603       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0605 18:14:24.403549       1 config.go:315] "Starting node config controller"
	I0605 18:14:24.404933       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0605 18:14:24.504711       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0605 18:14:24.505024       1 shared_informer.go:318] Caches are synced for node config
	I0605 18:14:24.505281       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [f4898fe3b98d8da5dd96757f14bd45d4596b39d01316f166c39623a20ca9c09e] <==
	* 
	* 
	* ==> kube-scheduler [71d3d8c961e5398097e4f6db56499fa661e52ecb42ce4529773a11baf8e4738c] <==
	* 
	* 
	* ==> kube-scheduler [e523467def4fe834aba2811465974e217cf6ab15cdac91b0d11bb211b25cb3f2] <==
	* I0605 18:14:21.534896       1 serving.go:348] Generated self-signed cert in-memory
	I0605 18:14:25.147117       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0605 18:14:25.147149       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0605 18:14:25.152515       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0605 18:14:25.152604       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0605 18:14:25.152666       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0605 18:14:25.152715       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0605 18:14:25.152773       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0605 18:14:25.152851       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0605 18:14:25.152934       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0605 18:14:25.153025       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0605 18:14:25.253237       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0605 18:14:25.253351       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0605 18:14:25.254327       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* Jun 05 18:14:23 pause-845789 kubelet[1396]: E0605 18:14:23.422153    1396 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-fhpqv_kube-system(30efb721-d346-4e6e-b034-3e3ec690f8e8)\"" pod="kube-system/coredns-5d78c9869d-fhpqv" podUID=30efb721-d346-4e6e-b034-3e3ec690f8e8
	Jun 05 18:14:23 pause-845789 kubelet[1396]: I0605 18:14:23.425644    1396 scope.go:115] "RemoveContainer" containerID="4fa0d894ad1629fc01f6d8b191a6350242e7cac66cac05705b065bc6f0d07664"
	Jun 05 18:14:23 pause-845789 kubelet[1396]: E0605 18:14:23.426014    1396 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-lkkp2_kube-system(4f53c8cc-23b4-4731-9dc9-6c5c802d1224)\"" pod="kube-system/coredns-5d78c9869d-lkkp2" podUID=4f53c8cc-23b4-4731-9dc9-6c5c802d1224
	Jun 05 18:14:23 pause-845789 kubelet[1396]: E0605 18:14:23.641171    1396 reflector.go:148] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.109516    1396 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30efb721-d346-4e6e-b034-3e3ec690f8e8-config-volume\") pod \"30efb721-d346-4e6e-b034-3e3ec690f8e8\" (UID: \"30efb721-d346-4e6e-b034-3e3ec690f8e8\") "
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.109596    1396 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzqbv\" (UniqueName: \"kubernetes.io/projected/30efb721-d346-4e6e-b034-3e3ec690f8e8-kube-api-access-qzqbv\") pod \"30efb721-d346-4e6e-b034-3e3ec690f8e8\" (UID: \"30efb721-d346-4e6e-b034-3e3ec690f8e8\") "
	Jun 05 18:14:37 pause-845789 kubelet[1396]: W0605 18:14:37.110153    1396 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/30efb721-d346-4e6e-b034-3e3ec690f8e8/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.110321    1396 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30efb721-d346-4e6e-b034-3e3ec690f8e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "30efb721-d346-4e6e-b034-3e3ec690f8e8" (UID: "30efb721-d346-4e6e-b034-3e3ec690f8e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.115154    1396 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30efb721-d346-4e6e-b034-3e3ec690f8e8-kube-api-access-qzqbv" (OuterVolumeSpecName: "kube-api-access-qzqbv") pod "30efb721-d346-4e6e-b034-3e3ec690f8e8" (UID: "30efb721-d346-4e6e-b034-3e3ec690f8e8"). InnerVolumeSpecName "kube-api-access-qzqbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.210553    1396 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30efb721-d346-4e6e-b034-3e3ec690f8e8-config-volume\") on node \"pause-845789\" DevicePath \"\""
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.210609    1396 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qzqbv\" (UniqueName: \"kubernetes.io/projected/30efb721-d346-4e6e-b034-3e3ec690f8e8-kube-api-access-qzqbv\") on node \"pause-845789\" DevicePath \"\""
	Jun 05 18:14:37 pause-845789 kubelet[1396]: I0605 18:14:37.606065    1396 scope.go:115] "RemoveContainer" containerID="0697dd4705d853e12210ccf5db442dc6f56f033e9df790b5229be8ea9f99475e"
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.121584    1396 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/576f237d6e37d705941c6d8e7eadefe4c01b2f05938e6892b4f394000e679e4d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/576f237d6e37d705941c6d8e7eadefe4c01b2f05938e6892b4f394000e679e4d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-pause-845789_eb7f52eaa379a771b8ee863a4defd4a2/kube-apiserver/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-pause-845789_eb7f52eaa379a771b8ee863a4defd4a2/kube-apiserver/0.log: no such file or directory
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.132385    1396 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cbf434e84c7c6a4e33cc901325079e034c6754bf9df711515749d2960d57c415/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cbf434e84c7c6a4e33cc901325079e034c6754bf9df711515749d2960d57c415/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-pause-845789_0a0ee1a52c371d2b931ced33f5032a1a/kube-controller-manager/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-pause-845789_0a0ee1a52c371d2b931ced33f5032a1a/kube-controller-manager/0.log: no such file or directory
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.178910    1396 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/db3864da828851854e75e1fb6cc1184ea95ecce1e413e0b4fe577b1fa163e7b8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/db3864da828851854e75e1fb6cc1184ea95ecce1e413e0b4fe577b1fa163e7b8/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-scheduler-pause-845789_858c88b0c315047f256d62cb236b388e/kube-scheduler/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-scheduler-pause-845789_858c88b0c315047f256d62cb236b388e/kube-scheduler/0.log: no such file or directory
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.194568    1396 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6fb5bc48e990b4cef32f23c979b6fd90131ef1315c4503072c430de6b0e2196b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6fb5bc48e990b4cef32f23c979b6fd90131ef1315c4503072c430de6b0e2196b/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_etcd-pause-845789_8170b16e1ee4df8d10045d53fe9f580f/etcd/0.log" to get inode usage: stat /var/log/pods/kube-system_etcd-pause-845789_8170b16e1ee4df8d10045d53fe9f580f/etcd/0.log: no such file or directory
	Jun 05 18:14:38 pause-845789 kubelet[1396]: I0605 18:14:38.209170    1396 scope.go:115] "RemoveContainer" containerID="4fa0d894ad1629fc01f6d8b191a6350242e7cac66cac05705b065bc6f0d07664"
	Jun 05 18:14:38 pause-845789 kubelet[1396]: I0605 18:14:38.210900    1396 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=30efb721-d346-4e6e-b034-3e3ec690f8e8 path="/var/lib/kubelet/pods/30efb721-d346-4e6e-b034-3e3ec690f8e8/volumes"
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.333768    1396 manager.go:1106] Failed to create existing container: /docker/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/crio/crio-49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf: Error finding container 49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf: Status 404 returned error can't find the container with id 49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.343265    1396 manager.go:1106] Failed to create existing container: /crio/crio-5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2: Error finding container 5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2: Status 404 returned error can't find the container with id 5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.343719    1396 manager.go:1106] Failed to create existing container: /crio/crio-49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf: Error finding container 49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf: Status 404 returned error can't find the container with id 49ed23ca3133e19db47f7a15bdb3093dcaa518de8802e9884f7a4726f18f1ddf
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.366973    1396 manager.go:1106] Failed to create existing container: /crio/crio-36e126b3e99c19b7f0637150f871141ceb4d1ed86d92e90f239175af67f78b3b: Error finding container 36e126b3e99c19b7f0637150f871141ceb4d1ed86d92e90f239175af67f78b3b: Status 404 returned error can't find the container with id 36e126b3e99c19b7f0637150f871141ceb4d1ed86d92e90f239175af67f78b3b
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.374551    1396 manager.go:1106] Failed to create existing container: /crio/crio-3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f: Error finding container 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f: Status 404 returned error can't find the container with id 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.379377    1396 manager.go:1106] Failed to create existing container: /docker/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/crio/crio-5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2: Error finding container 5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2: Status 404 returned error can't find the container with id 5f6c63a8a4eee07657d2889339d09fdcc90e1001aa0696ca4d5d5720facfb5b2
	Jun 05 18:14:38 pause-845789 kubelet[1396]: E0605 18:14:38.380646    1396 manager.go:1106] Failed to create existing container: /docker/bebf24cd58e99f081e5ad187c98456eb5d54b534a57b910931797539391a8a12/crio/crio-3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f: Error finding container 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f: Status 404 returned error can't find the container with id 3e7769c067e313759f0746b65258d17ffaf6d9b36010e958088005d0c8625e6f
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-845789 -n pause-845789
helpers_test.go:261: (dbg) Run:  kubectl --context pause-845789 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (52.10s)


Test pass (259/296)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 14.4
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.27.2/json-events 10.81
11 TestDownloadOnly/v1.27.2/preload-exists 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.6
22 TestAddons/Setup 167.95
26 TestAddons/parallel/InspektorGadget 10.62
27 TestAddons/parallel/MetricsServer 5.66
30 TestAddons/parallel/CSI 50.5
31 TestAddons/parallel/Headlamp 12.99
32 TestAddons/parallel/CloudSpanner 5.53
35 TestAddons/serial/GCPAuth/Namespaces 0.26
36 TestAddons/StoppedEnableDisable 12.34
37 TestCertOptions 35.88
38 TestCertExpiration 244.79
40 TestForceSystemdFlag 47.3
41 TestForceSystemdEnv 44.88
46 TestErrorSpam/setup 34.15
47 TestErrorSpam/start 0.83
48 TestErrorSpam/status 1.16
49 TestErrorSpam/pause 1.86
50 TestErrorSpam/unpause 1.99
51 TestErrorSpam/stop 1.48
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 76.81
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 42.3
58 TestFunctional/serial/KubeContext 0.08
59 TestFunctional/serial/KubectlGetPods 0.11
62 TestFunctional/serial/CacheCmd/cache/add_remote 4.19
63 TestFunctional/serial/CacheCmd/cache/add_local 1.05
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
65 TestFunctional/serial/CacheCmd/cache/list 0.06
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
67 TestFunctional/serial/CacheCmd/cache/cache_reload 2.26
68 TestFunctional/serial/CacheCmd/cache/delete 0.12
69 TestFunctional/serial/MinikubeKubectlCmd 0.14
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
71 TestFunctional/serial/ExtraConfig 32.19
72 TestFunctional/serial/ComponentHealth 0.1
73 TestFunctional/serial/LogsCmd 2.05
74 TestFunctional/serial/LogsFileCmd 1.93
76 TestFunctional/parallel/ConfigCmd 0.47
77 TestFunctional/parallel/DashboardCmd 13.05
78 TestFunctional/parallel/DryRun 0.47
79 TestFunctional/parallel/InternationalLanguage 0.22
80 TestFunctional/parallel/StatusCmd 1.34
84 TestFunctional/parallel/ServiceCmdConnect 10.79
85 TestFunctional/parallel/AddonsCmd 0.19
86 TestFunctional/parallel/PersistentVolumeClaim 25
88 TestFunctional/parallel/SSHCmd 0.75
89 TestFunctional/parallel/CpCmd 1.67
91 TestFunctional/parallel/FileSync 0.4
92 TestFunctional/parallel/CertSync 2.49
96 TestFunctional/parallel/NodeLabels 0.14
98 TestFunctional/parallel/NonActiveRuntimeDisabled 0.85
100 TestFunctional/parallel/License 0.41
102 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.77
103 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
105 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.49
106 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
107 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
111 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
112 TestFunctional/parallel/ServiceCmd/DeployApp 7.24
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
114 TestFunctional/parallel/ProfileCmd/profile_list 0.41
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
116 TestFunctional/parallel/MountCmd/any-port 8.74
117 TestFunctional/parallel/ServiceCmd/List 0.6
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.67
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
120 TestFunctional/parallel/ServiceCmd/Format 0.42
121 TestFunctional/parallel/ServiceCmd/URL 0.42
122 TestFunctional/parallel/MountCmd/specific-port 2.17
123 TestFunctional/parallel/MountCmd/VerifyCleanup 2.09
124 TestFunctional/parallel/Version/short 0.07
125 TestFunctional/parallel/Version/components 0.75
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.14
131 TestFunctional/parallel/ImageCommands/Setup 1.95
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.53
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.1
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.06
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.94
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.32
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.78
142 TestFunctional/delete_addon-resizer_images 0.09
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestIngressAddonLegacy/StartLegacyK8sCluster 99.89
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.81
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.46
155 TestJSONOutput/start/Command 75.85
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.85
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.74
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 5.91
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.24
180 TestKicCustomNetwork/create_custom_network 44.36
181 TestKicCustomNetwork/use_default_bridge_network 37.34
182 TestKicExistingNetwork 36.4
183 TestKicCustomSubnet 37.65
184 TestKicStaticIP 36.85
185 TestMainNoArgs 0.05
186 TestMinikubeProfile 76.93
189 TestMountStart/serial/StartWithMountFirst 7.52
190 TestMountStart/serial/VerifyMountFirst 0.29
191 TestMountStart/serial/StartWithMountSecond 7.59
192 TestMountStart/serial/VerifyMountSecond 0.29
193 TestMountStart/serial/DeleteFirst 1.7
194 TestMountStart/serial/VerifyMountPostDelete 0.27
195 TestMountStart/serial/Stop 1.24
196 TestMountStart/serial/RestartStopped 8.37
197 TestMountStart/serial/VerifyMountPostStop 0.28
200 TestMultiNode/serial/FreshStart2Nodes 122.29
201 TestMultiNode/serial/DeployApp2Nodes 6.3
203 TestMultiNode/serial/AddNode 51.16
204 TestMultiNode/serial/ProfileList 0.36
205 TestMultiNode/serial/CopyFile 11
206 TestMultiNode/serial/StopNode 2.41
207 TestMultiNode/serial/StartAfterStop 12.69
208 TestMultiNode/serial/RestartKeepsNodes 118.03
209 TestMultiNode/serial/DeleteNode 5.28
210 TestMultiNode/serial/StopMultiNode 24.11
211 TestMultiNode/serial/RestartMultiNode 86.97
212 TestMultiNode/serial/ValidateNameConflict 38.59
219 TestScheduledStopUnix 111.4
222 TestInsufficientStorage 13.4
225 TestKubernetesUpgrade 389.96
228 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
229 TestNoKubernetes/serial/StartWithK8s 41.95
230 TestNoKubernetes/serial/StartWithStopK8s 6.88
231 TestNoKubernetes/serial/Start 8.63
232 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
233 TestNoKubernetes/serial/ProfileList 0.63
234 TestNoKubernetes/serial/Stop 1.25
235 TestNoKubernetes/serial/StartNoArgs 7.7
236 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
237 TestStoppedBinaryUpgrade/Setup 1.33
239 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
248 TestPause/serial/Start 49.98
257 TestNetworkPlugins/group/false 5.82
262 TestStartStop/group/old-k8s-version/serial/FirstStart 122.16
263 TestStartStop/group/old-k8s-version/serial/DeployApp 10.58
264 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.93
265 TestStartStop/group/old-k8s-version/serial/Stop 12.16
266 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
267 TestStartStop/group/old-k8s-version/serial/SecondStart 432.87
269 TestStartStop/group/no-preload/serial/FirstStart 70.43
270 TestStartStop/group/no-preload/serial/DeployApp 9.64
271 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
272 TestStartStop/group/no-preload/serial/Stop 12.16
273 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
274 TestStartStop/group/no-preload/serial/SecondStart 628.1
275 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
276 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
277 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.42
278 TestStartStop/group/old-k8s-version/serial/Pause 4.34
280 TestStartStop/group/embed-certs/serial/FirstStart 82.89
281 TestStartStop/group/embed-certs/serial/DeployApp 10.57
282 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
283 TestStartStop/group/embed-certs/serial/Stop 12.27
284 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
285 TestStartStop/group/embed-certs/serial/SecondStart 610.51
286 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
287 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
288 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
289 TestStartStop/group/no-preload/serial/Pause 3.5
291 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.91
292 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.55
293 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
294 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.15
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
296 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 638.72
297 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
298 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
299 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.46
300 TestStartStop/group/embed-certs/serial/Pause 3.83
302 TestStartStop/group/newest-cni/serial/FirstStart 57.67
303 TestStartStop/group/newest-cni/serial/DeployApp 0
304 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
305 TestStartStop/group/newest-cni/serial/Stop 1.27
306 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/newest-cni/serial/SecondStart 30.87
308 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
309 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
311 TestStartStop/group/newest-cni/serial/Pause 3.24
312 TestNetworkPlugins/group/auto/Start 77.36
313 TestNetworkPlugins/group/auto/KubeletFlags 0.32
314 TestNetworkPlugins/group/auto/NetCatPod 10.37
315 TestNetworkPlugins/group/auto/DNS 0.22
316 TestNetworkPlugins/group/auto/Localhost 0.22
317 TestNetworkPlugins/group/auto/HairPin 0.23
318 TestNetworkPlugins/group/kindnet/Start 82.1
319 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
320 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
321 TestNetworkPlugins/group/kindnet/NetCatPod 10.41
322 TestNetworkPlugins/group/kindnet/DNS 0.22
323 TestNetworkPlugins/group/kindnet/Localhost 0.21
324 TestNetworkPlugins/group/kindnet/HairPin 0.2
325 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
326 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.19
327 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
328 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.6
329 TestNetworkPlugins/group/calico/Start 85.12
330 TestNetworkPlugins/group/custom-flannel/Start 70.31
331 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
332 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.38
333 TestNetworkPlugins/group/calico/ControllerPod 5.04
334 TestNetworkPlugins/group/calico/KubeletFlags 0.3
335 TestNetworkPlugins/group/calico/NetCatPod 11.44
336 TestNetworkPlugins/group/custom-flannel/DNS 0.35
337 TestNetworkPlugins/group/custom-flannel/Localhost 0.3
338 TestNetworkPlugins/group/custom-flannel/HairPin 0.29
339 TestNetworkPlugins/group/calico/DNS 0.33
340 TestNetworkPlugins/group/calico/Localhost 0.33
341 TestNetworkPlugins/group/calico/HairPin 0.41
342 TestNetworkPlugins/group/enable-default-cni/Start 95.17
343 TestNetworkPlugins/group/flannel/Start 70.11
344 TestNetworkPlugins/group/flannel/ControllerPod 5.03
345 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
346 TestNetworkPlugins/group/flannel/NetCatPod 10.41
347 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
348 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.4
349 TestNetworkPlugins/group/flannel/DNS 0.26
350 TestNetworkPlugins/group/flannel/Localhost 0.24
351 TestNetworkPlugins/group/flannel/HairPin 0.23
352 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
353 TestNetworkPlugins/group/enable-default-cni/Localhost 0.3
354 TestNetworkPlugins/group/enable-default-cni/HairPin 0.28
355 TestNetworkPlugins/group/bridge/Start 87.34
356 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
357 TestNetworkPlugins/group/bridge/NetCatPod 11.35
358 TestNetworkPlugins/group/bridge/DNS 0.21
359 TestNetworkPlugins/group/bridge/Localhost 0.18
360 TestNetworkPlugins/group/bridge/HairPin 0.18
TestDownloadOnly/v1.16.0/json-events (14.4s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-535520 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-535520 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.402194432s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.40s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-535520
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-535520: exit status 85 (78.916276ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-535520 | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC |          |
	|         | -p download-only-535520        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/05 17:30:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0605 17:30:32.275280  407818 out.go:296] Setting OutFile to fd 1 ...
	I0605 17:30:32.275497  407818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:30:32.275524  407818 out.go:309] Setting ErrFile to fd 2...
	I0605 17:30:32.275544  407818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:30:32.275750  407818 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	W0605 17:30:32.276053  407818 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16634-402421/.minikube/config/config.json: open /home/jenkins/minikube-integration/16634-402421/.minikube/config/config.json: no such file or directory
	I0605 17:30:32.276618  407818 out.go:303] Setting JSON to true
	I0605 17:30:32.277718  407818 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":7965,"bootTime":1685978268,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 17:30:32.277818  407818 start.go:137] virtualization:  
	I0605 17:30:32.281972  407818 out.go:97] [download-only-535520] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 17:30:32.284146  407818 out.go:169] MINIKUBE_LOCATION=16634
	W0605 17:30:32.282249  407818 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball: no such file or directory
	I0605 17:30:32.282287  407818 notify.go:220] Checking for updates...
	I0605 17:30:32.286997  407818 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 17:30:32.289568  407818 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:30:32.291829  407818 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 17:30:32.293784  407818 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0605 17:30:32.297241  407818 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0605 17:30:32.297620  407818 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 17:30:32.322019  407818 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 17:30:32.322112  407818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:30:32.399503  407818 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-06-05 17:30:32.387095986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:30:32.399602  407818 docker.go:294] overlay module found
	I0605 17:30:32.401451  407818 out.go:97] Using the docker driver based on user configuration
	I0605 17:30:32.401483  407818 start.go:297] selected driver: docker
	I0605 17:30:32.401490  407818 start.go:875] validating driver "docker" against <nil>
	I0605 17:30:32.401607  407818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:30:32.461687  407818 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-06-05 17:30:32.451816814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:30:32.461844  407818 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0605 17:30:32.462119  407818 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0605 17:30:32.462289  407818 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0605 17:30:32.464159  407818 out.go:169] Using Docker driver with root privileges
	I0605 17:30:32.465828  407818 cni.go:84] Creating CNI manager for ""
	I0605 17:30:32.465845  407818 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 17:30:32.465858  407818 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0605 17:30:32.465868  407818 start_flags.go:319] config:
	{Name:download-only-535520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-535520 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:30:32.467717  407818 out.go:97] Starting control plane node download-only-535520 in cluster download-only-535520
	I0605 17:30:32.467840  407818 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 17:30:32.469565  407818 out.go:97] Pulling base image ...
	I0605 17:30:32.469604  407818 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0605 17:30:32.469649  407818 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon
	I0605 17:30:32.487028  407818 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f to local cache
	I0605 17:30:32.487212  407818 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local cache directory
	I0605 17:30:32.487322  407818 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f to local cache
	I0605 17:30:32.542720  407818 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0605 17:30:32.542744  407818 cache.go:57] Caching tarball of preloaded images
	I0605 17:30:32.542898  407818 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0605 17:30:32.544878  407818 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0605 17:30:32.544919  407818 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0605 17:30:32.668009  407818 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0605 17:30:37.484766  407818 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-535520"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.27.2/json-events (10.81s)

=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-535520 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-535520 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.811938006s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (10.81s)

TestDownloadOnly/v1.27.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

TestDownloadOnly/v1.27.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-535520
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-535520: exit status 85 (70.034496ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-535520 | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC |          |
	|         | -p download-only-535520        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-535520 | jenkins | v1.30.1 | 05 Jun 23 17:30 UTC |          |
	|         | -p download-only-535520        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/05 17:30:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0605 17:30:46.755284  407896 out.go:296] Setting OutFile to fd 1 ...
	I0605 17:30:46.755455  407896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:30:46.755485  407896 out.go:309] Setting ErrFile to fd 2...
	I0605 17:30:46.755506  407896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:30:46.755669  407896 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	W0605 17:30:46.755814  407896 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16634-402421/.minikube/config/config.json: open /home/jenkins/minikube-integration/16634-402421/.minikube/config/config.json: no such file or directory
	I0605 17:30:46.756092  407896 out.go:303] Setting JSON to true
	I0605 17:30:46.757130  407896 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":7979,"bootTime":1685978268,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 17:30:46.757224  407896 start.go:137] virtualization:  
	I0605 17:30:46.759802  407896 out.go:97] [download-only-535520] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 17:30:46.761914  407896 out.go:169] MINIKUBE_LOCATION=16634
	I0605 17:30:46.760144  407896 notify.go:220] Checking for updates...
	I0605 17:30:46.765904  407896 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 17:30:46.768235  407896 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:30:46.770469  407896 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 17:30:46.772179  407896 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0605 17:30:46.775706  407896 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0605 17:30:46.776276  407896 config.go:182] Loaded profile config "download-only-535520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0605 17:30:46.776321  407896 start.go:783] api.Load failed for download-only-535520: filestore "download-only-535520": Docker machine "download-only-535520" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0605 17:30:46.776455  407896 driver.go:375] Setting default libvirt URI to qemu:///system
	W0605 17:30:46.776481  407896 start.go:783] api.Load failed for download-only-535520: filestore "download-only-535520": Docker machine "download-only-535520" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0605 17:30:46.800548  407896 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 17:30:46.800637  407896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:30:46.883112  407896 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-06-05 17:30:46.873553585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:30:46.883220  407896 docker.go:294] overlay module found
	I0605 17:30:46.884869  407896 out.go:97] Using the docker driver based on existing profile
	I0605 17:30:46.884897  407896 start.go:297] selected driver: docker
	I0605 17:30:46.884904  407896 start.go:875] validating driver "docker" against &{Name:download-only-535520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-535520 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:30:46.885084  407896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:30:46.949508  407896 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-06-05 17:30:46.939611278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:30:46.949939  407896 cni.go:84] Creating CNI manager for ""
	I0605 17:30:46.949955  407896 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0605 17:30:46.949964  407896 start_flags.go:319] config:
	{Name:download-only-535520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-535520 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:30:46.952028  407896 out.go:97] Starting control plane node download-only-535520 in cluster download-only-535520
	I0605 17:30:46.952088  407896 cache.go:122] Beginning downloading kic base image for docker with crio
	I0605 17:30:46.953775  407896 out.go:97] Pulling base image ...
	I0605 17:30:46.953805  407896 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:30:46.953968  407896 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local docker daemon
	I0605 17:30:46.970400  407896 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f to local cache
	I0605 17:30:46.970533  407896 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local cache directory
	I0605 17:30:46.970554  407896 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f in local cache directory, skipping pull
	I0605 17:30:46.970562  407896 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f exists in cache, skipping pull
	I0605 17:30:46.970570  407896 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f as a tarball
	I0605 17:30:47.020433  407896 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0605 17:30:47.020471  407896 cache.go:57] Caching tarball of preloaded images
	I0605 17:30:47.021205  407896 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:30:47.023668  407896 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0605 17:30:47.023694  407896 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 ...
	I0605 17:30:47.148304  407896 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:47dde9e158811a13dd0ed9ce5ff7e1c2 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4
	I0605 17:30:55.339178  407896 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 ...
	I0605 17:30:55.339291  407896 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16634-402421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4 ...
	I0605 17:30:56.150963  407896 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0605 17:30:56.151113  407896 profile.go:148] Saving config to /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/download-only-535520/config.json ...
	I0605 17:30:56.151339  407896 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0605 17:30:56.151554  407896 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/16634-402421/.minikube/cache/linux/arm64/v1.27.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-535520"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.07s)
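Note: the download lines above show the verification scheme at work: the preload tarball URL carries its expected digest in the query string (checksum=md5:47dde9e158811a13dd0ed9ce5ff7e1c2), and the kubectl binary is checked against the published .sha256 file the same way. A minimal Go sketch of that pattern, assuming the tarball is already on disk; this is illustrative only, not minikube's download package:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		// File name and expected digest are taken from the log above.
		f, err := os.Open("preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-arm64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil { // stream the tarball through the hash
			panic(err)
		}
		got := hex.EncodeToString(h.Sum(nil))
		want := "47dde9e158811a13dd0ed9ce5ff7e1c2" // checksum=md5:... from the URL
		fmt.Println("checksum ok:", got == want)
	}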

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-535520
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-444845 --alsologtostderr --binary-mirror http://127.0.0.1:43845 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-444845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-444845
--- PASS: TestBinaryMirror (0.60s)
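Note: TestBinaryMirror points --binary-mirror at a short-lived HTTP endpoint on 127.0.0.1:43845, so the kubeadm/kubelet/kubectl downloads come from it rather than from dl.k8s.io. Any static file server satisfies that contract; a minimal Go sketch, assuming the release binaries sit in a local directory (the directory name is an assumption, and this is not the test's actual helper):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory of Kubernetes release binaries over plain HTTP.
		// "./k8s-binaries" is an assumed layout; the port matches the log above.
		log.Fatal(http.ListenAndServe("127.0.0.1:43845",
			http.FileServer(http.Dir("./k8s-binaries"))))
	}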

                                                
                                    
x
+
TestAddons/Setup (167.95s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-735995 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-735995 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m47.951228733s)
--- PASS: TestAddons/Setup (167.95s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.62s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mptt6" [4c11cf66-9003-4cc0-a051-4c949bbae334] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008454531s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-735995
2023/06/05 17:34:05 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2023/06/05 17:34:05 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-735995: (5.608788355s)
--- PASS: TestAddons/parallel/InspektorGadget (10.62s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.66s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 4.586843ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-66p4n" [da2b3efb-e47f-430a-b8c7-e9c926140c32] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010818333s
addons_test.go:391: (dbg) Run:  kubectl --context addons-735995 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-735995 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.66s)

                                                
                                    
x
+
TestAddons/parallel/CSI (50.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 20.880574ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-735995 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-735995 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [76fe8aca-51e1-4271-801c-18092a5f60ab] Pending
helpers_test.go:344: "task-pv-pod" [76fe8aca-51e1-4271-801c-18092a5f60ab] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [76fe8aca-51e1-4271-801c-18092a5f60ab] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.009776871s
addons_test.go:560: (dbg) Run:  kubectl --context addons-735995 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-735995 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-735995 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-735995 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-735995 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735995 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-735995 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0d7e459c-f863-4193-af6a-b12e80eb5a7a] Pending
helpers_test.go:344: "task-pv-pod-restore" [0d7e459c-f863-4193-af6a-b12e80eb5a7a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0d7e459c-f863-4193-af6a-b12e80eb5a7a] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.01468044s
addons_test.go:602: (dbg) Run:  kubectl --context addons-735995 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-735995 delete pod task-pv-pod-restore: (1.099440917s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-735995 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-735995 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-735995 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-735995 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.593765264s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-735995 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.50s)
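Note: the repeated helpers_test.go:394 lines above are a poll loop: the helper re-runs the same JSONPath query until the PVC phase reported by kubectl reaches Bound or the 6m0s budget expires. A minimal Go sketch of that loop, shelling out to kubectl the same way; the retry count and interval here are assumptions, not the values helpers_test.go uses:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Poll until the PVC leaves Pending, as the helper lines above do.
		for i := 0; i < 60; i++ {
			out, err := exec.Command("kubectl", "--context", "addons-735995",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && string(out) == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pvc hpvc")
	}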

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-735995 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-wfclp" [56423ab6-0abb-4c3d-a284-526ff1a93b65] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-wfclp" [56423ab6-0abb-4c3d-a284-526ff1a93b65] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.00797355s
--- PASS: TestAddons/parallel/Headlamp (12.99s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6964794569-92prd" [bd250baa-6e85-4803-92cd-d35991e1692d] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.020028844s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-735995
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.26s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-735995 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-735995 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.26s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.34s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-735995
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-735995: (12.067341224s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-735995
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-735995
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-735995
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

                                                
                                    
x
+
TestCertOptions (35.88s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-578334 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0605 18:16:49.165675  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-578334 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.161327642s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-578334 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-578334 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-578334 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-578334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-578334
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-578334: (2.014572605s)
--- PASS: TestCertOptions (35.88s)
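Note: the openssl step above is what asserts that the extra --apiserver-ips and --apiserver-names values were baked into the API server certificate as subject alternative names. The same property can be read directly from the certificate; a minimal Go sketch, reusing the cert path from the test (the program itself is illustrative, not the test's code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data) // first PEM block holds the certificate
		if block == nil {
			panic("no PEM data in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)    // should include localhost, www.google.com
		fmt.Println("IP SANs: ", cert.IPAddresses) // should include 127.0.0.1, 192.168.15.15
	}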

                                                
                                    
x
+
TestCertExpiration (244.79s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-457848 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-457848 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.948810543s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-457848 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-457848 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.413879882s)
helpers_test.go:175: Cleaning up "cert-expiration-457848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-457848
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-457848: (2.428488614s)
--- PASS: TestCertExpiration (244.79s)
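Note: this test first issues certificates valid for only 3m, lets them lapse, then restarts with --cert-expiration=8760h (one year) and expects fresh certificates to be generated. The property being exercised is the certificate's NotAfter field; a minimal Go sketch that reports the remaining validity window (the path is reused from the cert-options test above purely for illustration):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM data in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("expires %s (%s from now)\n",
			cert.NotAfter.Format(time.RFC3339),
			time.Until(cert.NotAfter).Round(time.Minute))
	}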

                                                
                                    
x
+
TestForceSystemdFlag (47.3s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-300073 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-300073 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.989780598s)
docker_test.go:126: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-300073 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-300073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-300073
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-300073: (2.90724009s)
--- PASS: TestForceSystemdFlag (47.30s)
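Note: the "cat /etc/crio/crio.conf.d/02-crio.conf" step exists to confirm that --force-systemd switched CRI-O from the default cgroupfs driver to the systemd cgroup manager, which CRI-O configures via the cgroup_manager TOML key under [crio.runtime]. A minimal Go sketch of that assertion (the exact drop-in content minikube writes is an assumption here):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			panic(err)
		}
		// --force-systemd is expected to set cgroup_manager to "systemd".
		if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
			fmt.Println("crio is using the systemd cgroup manager")
		} else {
			fmt.Println("systemd cgroup manager not configured")
		}
	}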

                                                
                                    
x
+
TestForceSystemdEnv (44.88s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-205099 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:149: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-205099 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.138178996s)
helpers_test.go:175: Cleaning up "force-systemd-env-205099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-205099
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-205099: (2.74194504s)
--- PASS: TestForceSystemdEnv (44.88s)

                                                
                                    
x
+
TestErrorSpam/setup (34.15s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-865991 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-865991 --driver=docker  --container-runtime=crio
E0605 17:38:47.235529  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:38:47.243875  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:38:47.254243  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:38:47.274720  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:38:47.314957  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:38:47.395167  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:38:47.555417  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:38:47.875975  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:38:48.516860  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:38:49.797066  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-865991 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-865991 --driver=docker  --container-runtime=crio: (34.146361799s)
--- PASS: TestErrorSpam/setup (34.15s)

                                                
                                    
x
+
TestErrorSpam/start (0.83s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 start --dry-run
E0605 17:38:52.358031  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
--- PASS: TestErrorSpam/start (0.83s)

                                                
                                    
x
+
TestErrorSpam/status (1.16s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 status
--- PASS: TestErrorSpam/status (1.16s)

                                                
                                    
x
+
TestErrorSpam/pause (1.86s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 pause
--- PASS: TestErrorSpam/pause (1.86s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.99s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 unpause
E0605 17:38:57.479182  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
--- PASS: TestErrorSpam/unpause (1.99s)

                                                
                                    
x
+
TestErrorSpam/stop (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 stop: (1.280836145s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-865991 --log_dir /tmp/nospam-865991 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/16634-402421/.minikube/files/etc/test/nested/copy/407813/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (76.81s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-arm64 start -p functional-083977 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0605 17:39:07.719361  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:39:28.199588  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:40:09.160059  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
functional_test.go:2229: (dbg) Done: out/minikube-linux-arm64 start -p functional-083977 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.80836276s)
--- PASS: TestFunctional/serial/StartWithProxy (76.81s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (42.3s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-arm64 start -p functional-083977 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-arm64 start -p functional-083977 --alsologtostderr -v=8: (42.29645494s)
functional_test.go:658: soft start took 42.296932963s for "functional-083977" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.30s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-083977 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 cache add registry.k8s.io/pause:3.1: (1.389391491s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 cache add registry.k8s.io/pause:3.3: (1.419040317s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 cache add registry.k8s.io/pause:latest: (1.377029266s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-083977 /tmp/TestFunctionalserialCacheCmdcacheadd_local2420202627/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 cache add minikube-local-cache-test:functional-083977
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 cache delete minikube-local-cache-test:functional-083977
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-083977
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-083977 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (325.127697ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 cache reload: (1.277840125s)
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 kubectl -- --context functional-083977 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-083977 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.19s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-arm64 start -p functional-083977 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0605 17:41:31.080285  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-linux-arm64 start -p functional-083977 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.185506145s)
functional_test.go:756: restart took 32.185627991s for "functional-083977" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.19s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-083977 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (2.05s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 logs: (2.051905004s)
--- PASS: TestFunctional/serial/LogsCmd (2.05s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.93s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 logs --file /tmp/TestFunctionalserialLogsFileCmd3252111302/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 logs --file /tmp/TestFunctionalserialLogsFileCmd3252111302/001/logs.txt: (1.928217723s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.93s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-083977 config get cpus: exit status 14 (88.977446ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-083977 config get cpus: exit status 14 (59.34474ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
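The sequence above is unset → get (expect failure) → set → get → unset → get (expect failure again); per this log, `config get` on a missing key exits with status 14. A hedged Go sketch of the same round trip, reusing the binary path and profile name from this run:

// Sketch only: the exit code 14 for a missing key is taken from the log
// above, not from any documented contract.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the minikube binary and returns combined output and exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	} else if err != nil {
		log.Fatal(err)
	}
	return string(out), code
}

func main() {
	run("-p", "functional-083977", "config", "set", "cpus", "2")
	if out, code := run("-p", "functional-083977", "config", "get", "cpus"); code != 0 {
		log.Fatalf("expected cpus to be set, got exit %d: %s", code, out)
	}
	run("-p", "functional-083977", "config", "unset", "cpus")
	if _, code := run("-p", "functional-083977", "config", "get", "cpus"); code != 14 {
		log.Fatalf("expected exit status 14 for an unset key, got %d", code)
	}
	fmt.Println("config round trip ok")
}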

TestFunctional/parallel/DashboardCmd (13.05s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-083977 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-083977 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 432665: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.05s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-arm64 start -p functional-083977 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-083977 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (201.84812ms)
-- stdout --
	* [functional-083977] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0605 17:42:20.588542  432422 out.go:296] Setting OutFile to fd 1 ...
	I0605 17:42:20.589035  432422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:42:20.589046  432422 out.go:309] Setting ErrFile to fd 2...
	I0605 17:42:20.589052  432422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:42:20.589314  432422 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 17:42:20.589754  432422 out.go:303] Setting JSON to false
	I0605 17:42:20.590772  432422 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8673,"bootTime":1685978268,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 17:42:20.590852  432422 start.go:137] virtualization:  
	I0605 17:42:20.593438  432422 out.go:177] * [functional-083977] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 17:42:20.595146  432422 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 17:42:20.596832  432422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 17:42:20.595273  432422 notify.go:220] Checking for updates...
	I0605 17:42:20.601351  432422 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:42:20.603203  432422 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 17:42:20.604993  432422 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 17:42:20.607101  432422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 17:42:20.609394  432422 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:42:20.609962  432422 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 17:42:20.634024  432422 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 17:42:20.634120  432422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:42:20.721780  432422 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-06-05 17:42:20.711945848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:42:20.721880  432422 docker.go:294] overlay module found
	I0605 17:42:20.724500  432422 out.go:177] * Using the docker driver based on existing profile
	I0605 17:42:20.726437  432422 start.go:297] selected driver: docker
	I0605 17:42:20.726454  432422 start.go:875] validating driver "docker" against &{Name:functional-083977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-083977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:42:20.726565  432422 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 17:42:20.729298  432422 out.go:177] 
	W0605 17:42:20.731073  432422 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0605 17:42:20.733052  432422 out.go:177] 
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-arm64 start -p functional-083977 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
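A dry run validates the requested flags against the existing profile without touching the running cluster; here the 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23 (both values taken from the log above). A small Go sketch asserting that behavior:

// Sketch under the same assumptions as the run above (binary path, profile
// name, driver, and runtime flags copied from the log).
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-083977",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	ee, ok := err.(*exec.ExitError)
	if !ok || ee.ExitCode() != 23 {
		log.Fatalf("expected exit status 23, got %v", err)
	}
	if !strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		log.Fatalf("unexpected output: %s", out)
	}
	fmt.Println("dry-run correctly rejected the 250MB request")
}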

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-arm64 start -p functional-083977 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-083977 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (219.230182ms)
-- stdout --
	* [functional-083977] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0605 17:42:20.375282  432379 out.go:296] Setting OutFile to fd 1 ...
	I0605 17:42:20.375414  432379 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:42:20.375424  432379 out.go:309] Setting ErrFile to fd 2...
	I0605 17:42:20.375430  432379 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:42:20.375671  432379 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 17:42:20.376077  432379 out.go:303] Setting JSON to false
	I0605 17:42:20.377097  432379 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8673,"bootTime":1685978268,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 17:42:20.377165  432379 start.go:137] virtualization:  
	I0605 17:42:20.380378  432379 out.go:177] * [functional-083977] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	I0605 17:42:20.384747  432379 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 17:42:20.384836  432379 notify.go:220] Checking for updates...
	I0605 17:42:20.390637  432379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 17:42:20.393497  432379 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 17:42:20.397804  432379 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 17:42:20.400568  432379 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 17:42:20.403396  432379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 17:42:20.406392  432379 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:42:20.406960  432379 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 17:42:20.431584  432379 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 17:42:20.431705  432379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:42:20.524432  432379 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-06-05 17:42:20.51450777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:42:20.524584  432379 docker.go:294] overlay module found
	I0605 17:42:20.526874  432379 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0605 17:42:20.528884  432379 start.go:297] selected driver: docker
	I0605 17:42:20.528903  432379 start.go:875] validating driver "docker" against &{Name:functional-083977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685959646-16634@sha256:197493132bd1eebb52a0757acb46d15a50bd5dc673e369e5145876a5268a6a6f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-083977 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0605 17:42:20.529013  432379 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 17:42:20.531623  432379 out.go:177] 
	W0605 17:42:20.533432  432379 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0605 17:42:20.535512  432379 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
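The French output shows the same dry-run failing under a non-English locale. How the test forces the locale is not visible in this log; the sketch below assumes the conventional LC_ALL environment variable is what selects the translation.

// Assumption: minikube picks its translation from the standard locale
// environment variables; LC_ALL=fr is a guess consistent with the output above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-083977",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput() // exit status 23 expected, as in the English run
	fmt.Println(string(out))       // e.g. "* [functional-083977] minikube v1.30.1 sur Ubuntu 20.04 (arm64)"
}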

TestFunctional/parallel/StatusCmd (1.34s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)

TestFunctional/parallel/ServiceCmdConnect (10.79s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-083977 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-083977 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-m8qr8" [ff0ce8c5-864e-46bc-b747-c81fe3d7ce82] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-m8qr8" [ff0ce8c5-864e-46bc-b747-c81fe3d7ce82] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.012441907s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.49.2:30839
functional_test.go:1673: http://192.168.49.2:30839: success! body:

Hostname: hello-node-connect-58d66798bb-m8qr8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30839
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.79s)
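The flow above is: create a deployment, expose it as a NodePort service, resolve a node URL with `minikube service --url`, then issue a plain HTTP GET. A minimal Go sketch of the client half, using the profile and service name from this run:

// Sketch: assumes the hello-node-connect service already exists, as set up
// by the kubectl commands logged above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	urlBytes, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-083977",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(urlBytes)) // e.g. http://192.168.49.2:30839
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}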

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (25s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1b10b0ef-83c6-40a4-acb4-feb4689ec19f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.025388154s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-083977 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-083977 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-083977 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-083977 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6d965fa4-f747-401f-b376-45d725b5d0e4] Pending
helpers_test.go:344: "sp-pod" [6d965fa4-f747-401f-b376-45d725b5d0e4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6d965fa4-f747-401f-b376-45d725b5d0e4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.008137025s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-083977 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-083977 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-083977 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a863b578-52b4-4ba9-b934-034d349623e6] Pending
helpers_test.go:344: "sp-pod" [a863b578-52b4-4ba9-b934-034d349623e6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a863b578-52b4-4ba9-b934-034d349623e6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00955034s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-083977 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.00s)
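The interesting part of this test is the persistence check: a file written through the first sp-pod must still exist after the pod is deleted and recreated against the same claim. A hedged Go sketch of that sequence (the real test also waits for the replacement pod to reach Running before the final check):

// Sketch: manifest paths are the testdata files named in the log; the
// wait-for-Running step between apply and the final exec is elided here.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func kubectl(args ...string) string {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-083977"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	if !strings.Contains(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"), "foo") {
		log.Fatal("file did not survive pod recreation")
	}
	log.Println("file persisted across pod recreation")
}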

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (1.67s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh -n functional-083977 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 cp functional-083977:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd737301076/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh -n functional-083977 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.67s)

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/407813/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "sudo cat /etc/test/nested/copy/407813/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.49s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/407813.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "sudo cat /etc/ssl/certs/407813.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/407813.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "sudo cat /usr/share/ca-certificates/407813.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/4078132.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "sudo cat /etc/ssl/certs/4078132.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/4078132.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "sudo cat /usr/share/ca-certificates/4078132.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.49s)
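The paths above suggest each synced certificate is visible three ways: as /etc/ssl/certs/<name>.pem, under /usr/share/ca-certificates, and under what looks like an OpenSSL subject-hash filename (51391683.0, 3ec20f2e.0). A sketch that checks the three locations serve identical bytes, using the paths from this run:

// Sketch: path list copied from the log; the subject-hash interpretation of
// the numbered filenames is an assumption, not something the log states.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/407813.pem",
		"/usr/share/ca-certificates/407813.pem",
		"/etc/ssl/certs/51391683.0",
	}
	var first string
	for i, p := range paths {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-083977",
			"ssh", "sudo cat "+p).Output()
		if err != nil {
			log.Fatalf("%s: %v", p, err)
		}
		if i == 0 {
			first = string(out)
		} else if string(out) != first {
			log.Fatalf("%s differs from %s", p, paths[0])
		}
	}
	fmt.Println("all three paths serve the same certificate")
}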

TestFunctional/parallel/NodeLabels (0.14s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-083977 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "sudo systemctl is-active docker"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-083977 ssh "sudo systemctl is-active docker": exit status 1 (404.312851ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2022: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "sudo systemctl is-active containerd"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-083977 ssh "sudo systemctl is-active containerd": exit status 1 (444.994865ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)
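Because this profile runs crio, the docker and containerd units should both be inactive; `systemctl is-active` prints the state and exits non-zero for anything but "active", which is why the harness records exit status 1 as a pass. A minimal sketch of the same probe:

// Sketch: "inactive" plus a non-nil error is the expected outcome for both
// units on this crio-based node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-083977",
			"ssh", "sudo systemctl is-active "+unit).Output()
		fmt.Printf("%s: %s (err=%v)\n", unit, strings.TrimSpace(string(out)), err)
	}
}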

TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.77s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-083977 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-083977 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-083977 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-083977 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 430533: os: process already finished
helpers_test.go:508: unable to kill pid 430375: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.77s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-083977 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-083977 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a398bc57-baba-4d24-ba61-83cd6055a8b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a398bc57-baba-4d24-ba61-83cd6055a8b5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.018077609s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-083977 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.52.203 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
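With `minikube tunnel` running, the LoadBalancer service gets the ingress IP queried in WaitService/IngressIP above (10.108.52.203 in this run), and that IP is directly reachable from the host. A sketch of the same check:

// Sketch: service name and jsonpath expression are the ones from the
// IngressIP step above.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	ipBytes, err := exec.Command("kubectl", "--context", "functional-083977",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatal(err)
	}
	ip := strings.TrimSpace(string(ipBytes))
	resp, err := http.Get("http://" + ip)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	fmt.Printf("tunnel at http://%s is working! (HTTP %d)\n", ip, resp.StatusCode)
}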

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-083977 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-083977 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-083977 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-bsq22" [4db407fa-595b-4a61-8de4-6885ff6b5392] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-bsq22" [4db407fa-595b-4a61-8de4-6885ff6b5392] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.015521317s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1313: Took "349.156708ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1327: Took "58.303873ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1364: Took "390.100359ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1377: Took "58.507893ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/MountCmd/any-port (8.74s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-083977 /tmp/TestFunctionalparallelMountCmdany-port2158269388/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1685986935444105115" to /tmp/TestFunctionalparallelMountCmdany-port2158269388/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1685986935444105115" to /tmp/TestFunctionalparallelMountCmdany-port2158269388/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1685986935444105115" to /tmp/TestFunctionalparallelMountCmdany-port2158269388/001/test-1685986935444105115
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-083977 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (382.029723ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  5 17:42 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  5 17:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  5 17:42 test-1685986935444105115
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh cat /mount-9p/test-1685986935444105115
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-083977 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bc340c04-c874-4f8a-a051-b413f5b19fd3] Pending
helpers_test.go:344: "busybox-mount" [bc340c04-c874-4f8a-a051-b413f5b19fd3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bc340c04-c874-4f8a-a051-b413f5b19fd3] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bc340c04-c874-4f8a-a051-b413f5b19fd3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.02206337s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-083977 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-083977 /tmp/TestFunctionalparallelMountCmdany-port2158269388/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.74s)
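The mount test serves a host temp directory into the guest at /mount-9p over 9p; note the first findmnt probe fails with exit status 1 simply because the mount is not up yet, and the test retries. A sketch of that probe loop, assuming a `minikube mount` process is already running for this profile:

// Sketch: retries findmnt inside the guest until the 9p mount appears.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 5; i++ {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-083977",
			"ssh", "findmnt -T /mount-9p | grep 9p").Output()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}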

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 service list -o json
functional_test.go:1492: Took "666.479853ms" to run "out/minikube-linux-arm64 -p functional-083977 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.49.2:31309
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.49.2:31309
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/MountCmd/specific-port (2.17s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-083977 /tmp/TestFunctionalparallelMountCmdspecific-port3118369197/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-083977 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (657.037056ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-083977 /tmp/TestFunctionalparallelMountCmdspecific-port3118369197/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-083977 ssh "sudo umount -f /mount-9p": exit status 1 (423.224008ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-083977 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-083977 /tmp/TestFunctionalparallelMountCmdspecific-port3118369197/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.09s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-083977 /tmp/TestFunctionalparallelMountCmdVerifyCleanup220998496/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-083977 /tmp/TestFunctionalparallelMountCmdVerifyCleanup220998496/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-083977 /tmp/TestFunctionalparallelMountCmdVerifyCleanup220998496/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 ssh "findmnt -T" /mount1: (1.178057419s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-083977 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-083977 /tmp/TestFunctionalparallelMountCmdVerifyCleanup220998496/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-083977 /tmp/TestFunctionalparallelMountCmdVerifyCleanup220998496/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-083977 /tmp/TestFunctionalparallelMountCmdVerifyCleanup220998496/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.09s)
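
The cleanup flow above can be reproduced by hand against any running profile; a minimal sketch, assuming the functional-083977 profile is up and /tmp/hostdir is a placeholder host directory:

  # start a 9p mount in the background, confirm the guest sees it, then kill it
  out/minikube-linux-arm64 mount -p functional-083977 /tmp/hostdir:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-083977 ssh "findmnt -T /mount1"
  # --kill=true tears down every mount process for the profile, which is what VerifyCleanup checks
  out/minikube-linux-arm64 mount -p functional-083977 --kill=true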

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.75s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-083977 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-083977
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-083977 image ls --format short --alsologtostderr:
I0605 17:42:52.139229  434935 out.go:296] Setting OutFile to fd 1 ...
I0605 17:42:52.139455  434935 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0605 17:42:52.139483  434935 out.go:309] Setting ErrFile to fd 2...
I0605 17:42:52.139503  434935 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0605 17:42:52.139703  434935 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
I0605 17:42:52.140525  434935 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0605 17:42:52.140704  434935 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0605 17:42:52.141271  434935 cli_runner.go:164] Run: docker container inspect functional-083977 --format={{.State.Status}}
I0605 17:42:52.169066  434935 ssh_runner.go:195] Run: systemctl --version
I0605 17:42:52.169124  434935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-083977
I0605 17:42:52.197814  434935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/functional-083977/id_rsa Username:docker}
I0605 17:42:52.295272  434935 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-083977 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b18bf71b941ba | 60.9MB |
| docker.io/library/nginx                 | alpine             | 5ee47dcca7543 | 42.8MB |
| docker.io/library/nginx                 | latest             | c42efe0b54387 | 140MB  |
| gcr.io/google-containers/addon-resizer  | functional-083977  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 24bc64e911039 | 182MB  |
| registry.k8s.io/kube-proxy              | v1.27.2            | 29921a0845422 | 68.1MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-apiserver          | v1.27.2            | 72c9df6be7f1b | 116MB  |
| registry.k8s.io/kube-controller-manager | v1.27.2            | 2ee705380c3c5 | 109MB  |
| registry.k8s.io/kube-scheduler          | v1.27.2            | 305d7ed1dae28 | 57.6MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-083977 image ls --format table --alsologtostderr:
I0605 17:42:52.805727  435070 out.go:296] Setting OutFile to fd 1 ...
I0605 17:42:52.805944  435070 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0605 17:42:52.805969  435070 out.go:309] Setting ErrFile to fd 2...
I0605 17:42:52.805988  435070 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0605 17:42:52.806164  435070 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
I0605 17:42:52.806816  435070 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0605 17:42:52.806983  435070 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0605 17:42:52.807498  435070 cli_runner.go:164] Run: docker container inspect functional-083977 --format={{.State.Status}}
I0605 17:42:52.833579  435070 ssh_runner.go:195] Run: systemctl --version
I0605 17:42:52.833637  435070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-083977
I0605 17:42:52.872406  435070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/functional-083977/id_rsa Username:docker}
I0605 17:42:52.974838  435070 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-083977 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},
{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},
{"id":"72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae","repoDigests":["registry.k8s.io/kube-apiserver@sha256:599c991fe774036dff5f54b3113290d83da173d7627ea259bd2a3064eaa7987e","registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"116138960"},
{"id":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f","docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"60881430"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},
{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-083977"],"size":"34114467"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},
{"id":"2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6626c27b7df41d86340a701121792c5c0dc40ca8877c23478fc5659103bc7505","registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"108667702"},
{"id":"29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0","repoDigests":["registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f","registry.k8s.io/kube-proxy@sha256:7ebc3b4df29c385197555a543c4a3379cfcdabdfbe37e2b2ea3ceac87ce28bca"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"68099991"},
{"id":"305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840","repoDigests":["registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177","registry.k8s.io/kube-scheduler@sha256:e0ecd0ce2447789a58ad5e94acda2cff8ad4e6ca3ccc06041b89e7eb0b78a6c4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"57615158"},
{"id":"5ee47dcca7543750b3941b52e98f103bbbae9aaf574ab4dc018e1e7d34e505ad","repoDigests":["docker.io/library/nginx@sha256:203cba3f56d7dba1d66b95c091db65a4f0778eb5d16e76151e73e0413e317328","docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90"],"repoTags":["docker.io/library/nginx:alpine"],"size":"42810437"},
{"id":"c42efe0b54387756e68d167a437aef21451f63eebd9330bb555367d67128386c","repoDigests":["docker.io/library/nginx@sha256:0bb91b50c42bc6677acff40ea0f050b655c5c2cc1311e783097a04061191340b","docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305"],"repoTags":["docker.io/library/nginx:latest"],"size":"139751562"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},
{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},
{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"182283991"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-083977 image ls --format json --alsologtostderr:
I0605 17:42:52.493559  434994 out.go:296] Setting OutFile to fd 1 ...
I0605 17:42:52.493745  434994 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0605 17:42:52.493771  434994 out.go:309] Setting ErrFile to fd 2...
I0605 17:42:52.493789  434994 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0605 17:42:52.493984  434994 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
I0605 17:42:52.494627  434994 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0605 17:42:52.494838  434994 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0605 17:42:52.495418  434994 cli_runner.go:164] Run: docker container inspect functional-083977 --format={{.State.Status}}
I0605 17:42:52.524104  434994 ssh_runner.go:195] Run: systemctl --version
I0605 17:42:52.524167  434994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-083977
I0605 17:42:52.568111  434994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/functional-083977/id_rsa Username:docker}
I0605 17:42:52.666789  434994 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
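
The stdout above is one JSON array of image objects, which makes it easy to post-process; a minimal sketch with jq (the id/repoTags/size field names are taken from the output itself):

  out/minikube-linux-arm64 -p functional-083977 image ls --format json \
    | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])  \(.size)"'

This prints one tag/size pair per tagged image and skips the untagged dashboard and metrics-scraper entries, whose repoTags arrays are empty.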

TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-arm64 -p functional-083977 image ls --format yaml --alsologtostderr:
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177
- registry.k8s.io/kube-scheduler@sha256:e0ecd0ce2447789a58ad5e94acda2cff8ad4e6ca3ccc06041b89e7eb0b78a6c4
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "57615158"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 5ee47dcca7543750b3941b52e98f103bbbae9aaf574ab4dc018e1e7d34e505ad
repoDigests:
- docker.io/library/nginx@sha256:203cba3f56d7dba1d66b95c091db65a4f0778eb5d16e76151e73e0413e317328
- docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90
repoTags:
- docker.io/library/nginx:alpine
size: "42810437"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6626c27b7df41d86340a701121792c5c0dc40ca8877c23478fc5659103bc7505
- registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "108667702"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f
- registry.k8s.io/kube-proxy@sha256:7ebc3b4df29c385197555a543c4a3379cfcdabdfbe37e2b2ea3ceac87ce28bca
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "68099991"
- id: c42efe0b54387756e68d167a437aef21451f63eebd9330bb555367d67128386c
repoDigests:
- docker.io/library/nginx@sha256:0bb91b50c42bc6677acff40ea0f050b655c5c2cc1311e783097a04061191340b
- docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305
repoTags:
- docker.io/library/nginx:latest
size: "139751562"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-083977
size: "34114467"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "182283991"
- id: 72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:599c991fe774036dff5f54b3113290d83da173d7627ea259bd2a3064eaa7987e
- registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "116138960"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "60881430"
functional_test.go:267: (dbg) Stderr: out/minikube-linux-arm64 -p functional-083977 image ls --format yaml --alsologtostderr:
I0605 17:42:52.146971  434934 out.go:296] Setting OutFile to fd 1 ...
I0605 17:42:52.147218  434934 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0605 17:42:52.147230  434934 out.go:309] Setting ErrFile to fd 2...
I0605 17:42:52.147246  434934 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0605 17:42:52.147423  434934 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
I0605 17:42:52.148542  434934 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0605 17:42:52.148690  434934 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0605 17:42:52.149235  434934 cli_runner.go:164] Run: docker container inspect functional-083977 --format={{.State.Status}}
I0605 17:42:52.183901  434934 ssh_runner.go:195] Run: systemctl --version
I0605 17:42:52.183977  434934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-083977
I0605 17:42:52.218214  434934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/functional-083977/id_rsa Username:docker}
I0605 17:42:52.327674  434934 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-083977 ssh pgrep buildkitd: exit status 1 (356.961305ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image build -t localhost/my-image:functional-083977 testdata/build --alsologtostderr
functional_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 image build -t localhost/my-image:functional-083977 testdata/build --alsologtostderr: (2.517597901s)
functional_test.go:318: (dbg) Stdout: out/minikube-linux-arm64 -p functional-083977 image build -t localhost/my-image:functional-083977 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f98d0628593
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-083977
--> 029aba2737e
Successfully tagged localhost/my-image:functional-083977
029aba2737eb4342c5a123f465780a6854960370777d6e9b1bd55002eda841aa
functional_test.go:321: (dbg) Stderr: out/minikube-linux-arm64 -p functional-083977 image build -t localhost/my-image:functional-083977 testdata/build --alsologtostderr:
I0605 17:42:52.799499  435068 out.go:296] Setting OutFile to fd 1 ...
I0605 17:42:52.800350  435068 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0605 17:42:52.800367  435068 out.go:309] Setting ErrFile to fd 2...
I0605 17:42:52.800374  435068 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0605 17:42:52.800537  435068 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
I0605 17:42:52.801197  435068 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0605 17:42:52.802110  435068 config.go:182] Loaded profile config "functional-083977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0605 17:42:52.802687  435068 cli_runner.go:164] Run: docker container inspect functional-083977 --format={{.State.Status}}
I0605 17:42:52.825383  435068 ssh_runner.go:195] Run: systemctl --version
I0605 17:42:52.825441  435068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-083977
I0605 17:42:52.849747  435068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/functional-083977/id_rsa Username:docker}
I0605 17:42:52.954158  435068 build_images.go:151] Building image from path: /tmp/build.3927636259.tar
I0605 17:42:52.954228  435068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0605 17:42:52.966118  435068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3927636259.tar
I0605 17:42:52.972587  435068 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3927636259.tar: stat -c "%s %y" /var/lib/minikube/build/build.3927636259.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3927636259.tar': No such file or directory
I0605 17:42:52.972626  435068 ssh_runner.go:362] scp /tmp/build.3927636259.tar --> /var/lib/minikube/build/build.3927636259.tar (3072 bytes)
I0605 17:42:53.009580  435068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3927636259
I0605 17:42:53.022108  435068 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3927636259 -xf /var/lib/minikube/build/build.3927636259.tar
I0605 17:42:53.041887  435068 crio.go:297] Building image: /var/lib/minikube/build/build.3927636259
I0605 17:42:53.041952  435068 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-083977 /var/lib/minikube/build/build.3927636259 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0605 17:42:55.211111  435068 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-083977 /var/lib/minikube/build/build.3927636259 --cgroup-manager=cgroupfs: (2.169130109s)
I0605 17:42:55.211181  435068 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3927636259
I0605 17:42:55.223441  435068 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3927636259.tar
I0605 17:42:55.234350  435068 build_images.go:207] Built localhost/my-image:functional-083977 from /tmp/build.3927636259.tar
I0605 17:42:55.234385  435068 build_images.go:123] succeeded building to: functional-083977
I0605 17:42:55.234391  435068 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.14s)
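
The STEP 1/3 through 3/3 lines in the log above pin down the build context almost exactly; a minimal sketch that recreates an equivalent context and builds it (./build-ctx and the content.txt contents are placeholders, not the test's actual files):

  mkdir -p ./build-ctx && echo hello > ./build-ctx/content.txt
  cat > ./build-ctx/Dockerfile <<'EOF'
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /
  EOF
  out/minikube-linux-arm64 -p functional-083977 image build -t localhost/my-image:functional-083977 ./build-ctx --alsologtostderr

On a CRI-O cluster, minikube stages the context as a tarball under /var/lib/minikube/build and delegates the build to sudo podman build, as the ssh_runner lines above show.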

TestFunctional/parallel/ImageCommands/Setup (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.919776116s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-083977
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image load --daemon gcr.io/google-containers/addon-resizer:functional-083977 --alsologtostderr
2023/06/05 17:42:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:353: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 image load --daemon gcr.io/google-containers/addon-resizer:functional-083977 --alsologtostderr: (5.245962803s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image load --daemon gcr.io/google-containers/addon-resizer:functional-083977 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 image load --daemon gcr.io/google-containers/addon-resizer:functional-083977 --alsologtostderr: (2.841030382s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.10s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.858924438s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-083977
functional_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image load --daemon gcr.io/google-containers/addon-resizer:functional-083977 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 image load --daemon gcr.io/google-containers/addon-resizer:functional-083977 --alsologtostderr: (3.928252477s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.06s)
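
The pull/tag/load sequence above is the general recipe for getting a host-side Docker image into the cluster's CRI-O image store without a registry; condensed, using the same tags the test uses:

  docker pull gcr.io/google-containers/addon-resizer:1.8.9
  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-083977
  out/minikube-linux-arm64 -p functional-083977 image load --daemon gcr.io/google-containers/addon-resizer:functional-083977
  out/minikube-linux-arm64 -p functional-083977 image ls   # verify the tag is now present in the cluster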

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image save gcr.io/google-containers/addon-resizer:functional-083977 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.94s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image rm gcr.io/google-containers/addon-resizer:functional-083977 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.050024266s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-083977
functional_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p functional-083977 image save --daemon gcr.io/google-containers/addon-resizer:functional-083977 --alsologtostderr
functional_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p functional-083977 image save --daemon gcr.io/google-containers/addon-resizer:functional-083977 --alsologtostderr: (2.721425622s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-083977
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.78s)
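
Together with ImageSaveToFile, ImageRemove, and ImageLoadFromFile above, this test closes a full export/import roundtrip; a minimal sketch of the same flow, with ./addon-resizer-save.tar standing in for the workspace path the tests use:

  out/minikube-linux-arm64 -p functional-083977 image save gcr.io/google-containers/addon-resizer:functional-083977 ./addon-resizer-save.tar
  out/minikube-linux-arm64 -p functional-083977 image rm gcr.io/google-containers/addon-resizer:functional-083977
  out/minikube-linux-arm64 -p functional-083977 image load ./addon-resizer-save.tar
  # or skip the tarball and copy straight into the host Docker daemon:
  out/minikube-linux-arm64 -p functional-083977 image save --daemon gcr.io/google-containers/addon-resizer:functional-083977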

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-083977
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-083977
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-083977
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (99.89s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-980425 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0605 17:43:47.236308  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:44:14.921136  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-980425 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m39.888390195s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (99.89s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.81s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-980425 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-980425 addons enable ingress --alsologtostderr -v=5: (11.809249423s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.81s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.46s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-980425 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.46s)

TestJSONOutput/start/Command (75.85s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-550633 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0605 17:48:11.087134  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:48:47.236342  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-550633 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m15.846124223s)
--- PASS: TestJSONOutput/start/Command (75.85s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.85s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-550633 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.85s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-550633 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.91s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-550633 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-550633 --output=json --user=testUser: (5.913707418s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-088176 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-088176 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.781523ms)

-- stdout --
	{"specversion":"1.0","id":"3435d02c-aaa9-4fa9-a76d-2abd782da133","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-088176] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"47ac5cba-73ab-4936-841c-00933ae34cce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16634"}}
	{"specversion":"1.0","id":"26663e51-f8e0-4e0b-a19e-f96363c4c441","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0174e576-c30d-4fa4-9d3f-60a4744a30c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig"}}
	{"specversion":"1.0","id":"76727d7f-25ce-4177-a71a-d5b4fe1b444c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube"}}
	{"specversion":"1.0","id":"f0737660-8b0c-4dfc-9cb9-4d967ba3c470","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5807159d-442b-4054-8843-8ca3fbd5c9ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b1510bd7-c8ed-4fd7-b3f8-222dc266d360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-088176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-088176
--- PASS: TestErrorJSONOutput (0.24s)
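
For readers of the JSON transcript above: every line minikube emits with --output=json is a CloudEvents-style envelope. As a rough sketch (not part of the test suite), the events decode with nothing but the Go standard library; the struct below mirrors the field names visible in the log, and the sample line is a shortened copy of the DRV_UNSUPPORTED_OS error event.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// cloudEvent mirrors the envelope fields seen in the log lines above.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Shortened copy of the error event from the stdout block above.
	line := `{"specversion":"1.0","id":"b1510bd7-c8ed-4fd7-b3f8-222dc266d360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
}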

TestKicCustomNetwork/create_custom_network (44.36s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-665445 --network=
E0605 17:49:33.007963  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 17:49:50.702508  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:49:50.709521  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:49:50.720617  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:49:50.741264  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:49:50.781932  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:49:50.862643  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:49:51.023427  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:49:51.344282  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:49:51.985680  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:49:53.265890  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:49:55.826087  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:50:00.946987  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-665445 --network=: (42.240660934s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-665445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-665445
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-665445: (2.088029694s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.36s)

TestKicCustomNetwork/use_default_bridge_network (37.34s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-605060 --network=bridge
E0605 17:50:11.187119  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:50:31.667911  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-605060 --network=bridge: (35.33132712s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-605060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-605060
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-605060: (1.987478341s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.34s)

TestKicExistingNetwork (36.4s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-137671 --network=existing-network
E0605 17:51:12.628134  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-137671 --network=existing-network: (34.210376009s)
helpers_test.go:175: Cleaning up "existing-network-137671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-137671
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-137671: (2.028328148s)
--- PASS: TestKicExistingNetwork (36.40s)

TestKicCustomSubnet (37.65s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-248697 --subnet=192.168.60.0/24
E0605 17:51:49.166480  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-248697 --subnet=192.168.60.0/24: (35.516485284s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-248697 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-248697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-248697
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-248697: (2.104846125s)
--- PASS: TestKicCustomSubnet (37.65s)
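
The subnet assertion in this test reduces to the docker network inspect template shown above. A minimal standalone sketch, reusing the profile name and CIDR from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test runs: pull the first IPAM subnet off the network.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-248697",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.60.0/24" {
		log.Fatalf("unexpected subnet %q", got)
	}
	fmt.Println("subnet matches:", got)
}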

TestKicStaticIP (36.85s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-915341 --static-ip=192.168.200.200
E0605 17:52:16.848223  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-915341 --static-ip=192.168.200.200: (34.597807989s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-915341 ip
helpers_test.go:175: Cleaning up "static-ip-915341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-915341
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-915341: (2.09490362s)
--- PASS: TestKicStaticIP (36.85s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (76.93s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-672807 --driver=docker  --container-runtime=crio
E0605 17:52:34.548353  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-672807 --driver=docker  --container-runtime=crio: (34.376341088s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-675876 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-675876 --driver=docker  --container-runtime=crio: (37.112446264s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-672807
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-675876
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-675876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-675876
E0605 17:53:47.235812  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-675876: (2.087745196s)
helpers_test.go:175: Cleaning up "first-672807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-672807
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-672807: (2.031911878s)
--- PASS: TestMinikubeProfile (76.93s)
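
profile list -ojson is what the test reads twice above. The payload itself is not shown in this log; as an assumption based on the minikube CLI, the output carries top-level "valid" and "invalid" arrays of profiles, which a sketch like the following could decode (the binary path matches the one used in this run):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList is an assumed shape: valid/invalid arrays of profiles with a Name.
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name) // first-672807 and second-675876 in this run
	}
}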

TestMountStart/serial/StartWithMountFirst (7.52s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-136395 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-136395 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.523198629s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.52s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-136395 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (7.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-138151 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-138151 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.584811072s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.59s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-138151 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-136395 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-136395 --alsologtostderr -v=5: (1.700704986s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-138151 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-138151
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-138151: (1.241994023s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (8.37s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-138151
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-138151: (7.373277999s)
--- PASS: TestMountStart/serial/RestartStopped (8.37s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-138151 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (122.29s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-292850 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0605 17:54:50.697973  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 17:55:10.281683  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 17:55:18.389128  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-292850 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m1.722941034s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.29s)

TestMultiNode/serial/DeployApp2Nodes (6.3s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-292850 -- rollout status deployment/busybox: (4.087600734s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-8g86r -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-mtn99 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-8g86r -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-mtn99 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-8g86r -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec busybox-67b7f59bb-mtn99 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.30s)
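
The six exec calls above amount to resolving three DNS names from each busybox pod. A compact sketch of the same loop, with pod names taken from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	pods := []string{"busybox-67b7f59bb-8g86r", "busybox-67b7f59bb-mtn99"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Mirrors: out/minikube-linux-arm64 kubectl -p multinode-292850 -- exec <pod> -- nslookup <name>
			out, err := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "multinode-292850",
				"--", "exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				log.Fatalf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}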

TestMultiNode/serial/AddNode (51.16s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-292850 -v 3 --alsologtostderr
E0605 17:56:49.166310  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-292850 -v 3 --alsologtostderr: (50.436564767s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.16s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (11s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp testdata/cp-test.txt multinode-292850:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp multinode-292850:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3159588763/001/cp-test_multinode-292850.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp multinode-292850:/home/docker/cp-test.txt multinode-292850-m02:/home/docker/cp-test_multinode-292850_multinode-292850-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m02 "sudo cat /home/docker/cp-test_multinode-292850_multinode-292850-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp multinode-292850:/home/docker/cp-test.txt multinode-292850-m03:/home/docker/cp-test_multinode-292850_multinode-292850-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m03 "sudo cat /home/docker/cp-test_multinode-292850_multinode-292850-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp testdata/cp-test.txt multinode-292850-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp multinode-292850-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3159588763/001/cp-test_multinode-292850-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp multinode-292850-m02:/home/docker/cp-test.txt multinode-292850:/home/docker/cp-test_multinode-292850-m02_multinode-292850.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850 "sudo cat /home/docker/cp-test_multinode-292850-m02_multinode-292850.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp multinode-292850-m02:/home/docker/cp-test.txt multinode-292850-m03:/home/docker/cp-test_multinode-292850-m02_multinode-292850-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m03 "sudo cat /home/docker/cp-test_multinode-292850-m02_multinode-292850-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp testdata/cp-test.txt multinode-292850-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp multinode-292850-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3159588763/001/cp-test_multinode-292850-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp multinode-292850-m03:/home/docker/cp-test.txt multinode-292850:/home/docker/cp-test_multinode-292850-m03_multinode-292850.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850 "sudo cat /home/docker/cp-test_multinode-292850-m03_multinode-292850.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 cp multinode-292850-m03:/home/docker/cp-test.txt multinode-292850-m02:/home/docker/cp-test_multinode-292850-m03_multinode-292850-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 ssh -n multinode-292850-m02 "sudo cat /home/docker/cp-test_multinode-292850-m03_multinode-292850-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.00s)
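
Each cp/ssh pair above is one round-trip check: copy a file onto a node, cat it back over ssh, compare. A sketch of a single round-trip using the first node and the paths from the log:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-arm64"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// minikube cp into the node, as in the helpers above.
	if err := exec.Command(mk, "-p", "multinode-292850", "cp",
		"testdata/cp-test.txt", "multinode-292850:/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatal(err)
	}
	// Read it back over ssh and compare.
	got, err := exec.Command(mk, "-p", "multinode-292850", "ssh", "-n", "multinode-292850",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("round-tripped file does not match testdata/cp-test.txt")
	}
}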

TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-292850 node stop m03: (1.246540002s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-292850 status: exit status 7 (578.406187ms)

-- stdout --
	multinode-292850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-292850-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-292850-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-292850 status --alsologtostderr: exit status 7 (588.373105ms)

-- stdout --
	multinode-292850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-292850-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-292850-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0605 17:57:38.384891  481481 out.go:296] Setting OutFile to fd 1 ...
	I0605 17:57:38.385131  481481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:57:38.385160  481481 out.go:309] Setting ErrFile to fd 2...
	I0605 17:57:38.385179  481481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 17:57:38.385373  481481 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 17:57:38.385601  481481 out.go:303] Setting JSON to false
	I0605 17:57:38.385667  481481 mustload.go:65] Loading cluster: multinode-292850
	I0605 17:57:38.385777  481481 notify.go:220] Checking for updates...
	I0605 17:57:38.386167  481481 config.go:182] Loaded profile config "multinode-292850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 17:57:38.386205  481481 status.go:255] checking status of multinode-292850 ...
	I0605 17:57:38.388188  481481 cli_runner.go:164] Run: docker container inspect multinode-292850 --format={{.State.Status}}
	I0605 17:57:38.415027  481481 status.go:330] multinode-292850 host status = "Running" (err=<nil>)
	I0605 17:57:38.415068  481481 host.go:66] Checking if "multinode-292850" exists ...
	I0605 17:57:38.415369  481481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-292850
	I0605 17:57:38.442259  481481 host.go:66] Checking if "multinode-292850" exists ...
	I0605 17:57:38.442558  481481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 17:57:38.442607  481481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850
	I0605 17:57:38.469683  481481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850/id_rsa Username:docker}
	I0605 17:57:38.567393  481481 ssh_runner.go:195] Run: systemctl --version
	I0605 17:57:38.574253  481481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:57:38.588489  481481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 17:57:38.676579  481481 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-06-05 17:57:38.665697593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 17:57:38.677247  481481 kubeconfig.go:92] found "multinode-292850" server: "https://192.168.58.2:8443"
	I0605 17:57:38.677275  481481 api_server.go:166] Checking apiserver status ...
	I0605 17:57:38.677322  481481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0605 17:57:38.691376  481481 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup
	I0605 17:57:38.703421  481481 api_server.go:182] apiserver freezer: "7:freezer:/docker/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47/crio/crio-ac15399dd51b259210446c46102e21066721c47497e13148b4b10a3c37058b3d"
	I0605 17:57:38.703501  481481 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ae5a1ee2c03ea8a290bb7f74ce89f89769559c8f98f93adb8f0bc3793267ef47/crio/crio-ac15399dd51b259210446c46102e21066721c47497e13148b4b10a3c37058b3d/freezer.state
	I0605 17:57:38.714588  481481 api_server.go:204] freezer state: "THAWED"
	I0605 17:57:38.714614  481481 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0605 17:57:38.723623  481481 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0605 17:57:38.723651  481481 status.go:421] multinode-292850 apiserver status = Running (err=<nil>)
	I0605 17:57:38.723676  481481 status.go:257] multinode-292850 status: &{Name:multinode-292850 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0605 17:57:38.723697  481481 status.go:255] checking status of multinode-292850-m02 ...
	I0605 17:57:38.724203  481481 cli_runner.go:164] Run: docker container inspect multinode-292850-m02 --format={{.State.Status}}
	I0605 17:57:38.747030  481481 status.go:330] multinode-292850-m02 host status = "Running" (err=<nil>)
	I0605 17:57:38.747061  481481 host.go:66] Checking if "multinode-292850-m02" exists ...
	I0605 17:57:38.747346  481481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-292850-m02
	I0605 17:57:38.766946  481481 host.go:66] Checking if "multinode-292850-m02" exists ...
	I0605 17:57:38.767285  481481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0605 17:57:38.767332  481481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-292850-m02
	I0605 17:57:38.787185  481481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/16634-402421/.minikube/machines/multinode-292850-m02/id_rsa Username:docker}
	I0605 17:57:38.882830  481481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0605 17:57:38.898726  481481 status.go:257] multinode-292850-m02 status: &{Name:multinode-292850-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0605 17:57:38.898760  481481 status.go:255] checking status of multinode-292850-m03 ...
	I0605 17:57:38.899075  481481 cli_runner.go:164] Run: docker container inspect multinode-292850-m03 --format={{.State.Status}}
	I0605 17:57:38.919894  481481 status.go:330] multinode-292850-m03 host status = "Stopped" (err=<nil>)
	I0605 17:57:38.919950  481481 status.go:343] host is not running, skipping remaining checks
	I0605 17:57:38.919959  481481 status.go:257] multinode-292850-m03 status: &{Name:multinode-292850-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
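
The stderr trace above shows how status fills in each column: inspect the container state, ssh in to probe kubelet, then hit the apiserver's /healthz and require a 200 "ok". A hypothetical standalone version of just the healthz probe follows; minikube authenticates with the cluster's CA, so skipping TLS verification here is purely to keep the sketch self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		log.Fatalf("apiserver status = Stopped: %v", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The trace above treats "returned 200: ok" as apiserver Running.
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
}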

TestMultiNode/serial/StartAfterStop (12.69s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-292850 node start m03 --alsologtostderr: (11.798951717s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.69s)

TestMultiNode/serial/RestartKeepsNodes (118.03s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-292850
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-292850
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-292850: (25.106645468s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-292850 --wait=true -v=8 --alsologtostderr
E0605 17:58:47.236210  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-292850 --wait=true -v=8 --alsologtostderr: (1m32.782337974s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-292850
--- PASS: TestMultiNode/serial/RestartKeepsNodes (118.03s)

TestMultiNode/serial/DeleteNode (5.28s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 node delete m03
E0605 17:59:50.697982  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-292850 node delete m03: (4.465189013s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.28s)

TestMultiNode/serial/StopMultiNode (24.11s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-292850 stop: (23.916279465s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-292850 status: exit status 7 (97.781381ms)

-- stdout --
	multinode-292850
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-292850-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-292850 status --alsologtostderr: exit status 7 (92.029071ms)

-- stdout --
	multinode-292850
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-292850-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0605 18:00:18.988555  489581 out.go:296] Setting OutFile to fd 1 ...
	I0605 18:00:18.988681  489581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:00:18.988691  489581 out.go:309] Setting ErrFile to fd 2...
	I0605 18:00:18.988697  489581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:00:18.989129  489581 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 18:00:18.989481  489581 out.go:303] Setting JSON to false
	I0605 18:00:18.989519  489581 mustload.go:65] Loading cluster: multinode-292850
	I0605 18:00:18.990281  489581 config.go:182] Loaded profile config "multinode-292850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 18:00:18.990317  489581 status.go:255] checking status of multinode-292850 ...
	I0605 18:00:18.991119  489581 cli_runner.go:164] Run: docker container inspect multinode-292850 --format={{.State.Status}}
	I0605 18:00:18.991140  489581 notify.go:220] Checking for updates...
	I0605 18:00:19.012033  489581 status.go:330] multinode-292850 host status = "Stopped" (err=<nil>)
	I0605 18:00:19.012054  489581 status.go:343] host is not running, skipping remaining checks
	I0605 18:00:19.012061  489581 status.go:257] multinode-292850 status: &{Name:multinode-292850 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0605 18:00:19.012090  489581 status.go:255] checking status of multinode-292850-m02 ...
	I0605 18:00:19.012407  489581 cli_runner.go:164] Run: docker container inspect multinode-292850-m02 --format={{.State.Status}}
	I0605 18:00:19.031020  489581 status.go:330] multinode-292850-m02 host status = "Stopped" (err=<nil>)
	I0605 18:00:19.031043  489581 status.go:343] host is not running, skipping remaining checks
	I0605 18:00:19.031050  489581 status.go:257] multinode-292850-m02 status: &{Name:multinode-292850-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

TestMultiNode/serial/RestartMultiNode (86.97s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-292850 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-292850 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m26.089479051s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-292850 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.97s)

TestMultiNode/serial/ValidateNameConflict (38.59s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-292850
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-292850-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-292850-m02 --driver=docker  --container-runtime=crio: exit status 14 (82.428337ms)

-- stdout --
	* [multinode-292850-m02] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-292850-m02' is duplicated with machine name 'multinode-292850-m02' in profile 'multinode-292850'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-292850-m03 --driver=docker  --container-runtime=crio
E0605 18:01:49.166519  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-292850-m03 --driver=docker  --container-runtime=crio: (35.669242051s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-292850
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-292850: exit status 80 (707.32871ms)

-- stdout --
	* Adding node m03 to cluster multinode-292850
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-292850-m03 already exists in multinode-292850-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-292850-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-292850-m03: (2.070330492s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.59s)

TestScheduledStopUnix (111.4s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-639332 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-639332 --memory=2048 --driver=docker  --container-runtime=crio: (34.856408899s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-639332 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-639332 -n scheduled-stop-639332
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-639332 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-639332 --cancel-scheduled
E0605 18:06:13.749752  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-639332 -n scheduled-stop-639332
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-639332
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-639332 --schedule 15s
E0605 18:06:49.166967  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-639332
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-639332: exit status 7 (75.041816ms)

-- stdout --
	scheduled-stop-639332
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-639332 -n scheduled-stop-639332
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-639332 -n scheduled-stop-639332: exit status 7 (80.506988ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-639332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-639332
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-639332: (4.815093291s)
--- PASS: TestScheduledStopUnix (111.40s)

TestInsufficientStorage (13.4s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-035590 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-035590 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.827942481s)

-- stdout --
	{"specversion":"1.0","id":"2fa27851-bebc-4d25-a52b-1b21120c73c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-035590] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"09a026a0-3edb-417f-88d4-25b90a785978","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16634"}}
	{"specversion":"1.0","id":"009bf856-5fc0-450b-afe3-40e2f08c3cf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"75e0f742-947b-4692-91ee-511f7ec2aef2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig"}}
	{"specversion":"1.0","id":"67221ed9-6ae9-4dc5-a541-4eade8787494","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube"}}
	{"specversion":"1.0","id":"515327c6-4db5-4aa1-9418-7a504b32463f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d88b81d1-8dac-4db9-b597-9b6edbe3a661","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"52f63ac2-2379-42b7-aecc-39cb28e3603b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3b542c06-17aa-47a5-83ca-a6d9411e1747","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"40496196-0c4b-4534-b537-fb87d6610706","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e305ee4-2567-4cfd-ac52-af4bf9240d74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5cf40233-70fc-4be3-ac8a-4e60d359341b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-035590 in cluster insufficient-storage-035590","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"74689338-7772-40a9-8f2a-e83209b2ba47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f61f980b-a999-400e-b0bc-9c9307f2caed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a584f416-3b03-43d0-bac1-7b4727aefb0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-035590 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-035590 --output=json --layout=cluster: exit status 7 (318.244531ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-035590","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-035590","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0605 18:07:37.132401  506802 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-035590" does not appear in /home/jenkins/minikube-integration/16634-402421/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-035590 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-035590 --output=json --layout=cluster: exit status 7 (320.807185ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-035590","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-035590","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0605 18:07:37.457540  506856 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-035590" does not appear in /home/jenkins/minikube-integration/16634-402421/kubeconfig
	E0605 18:07:37.470380  506856 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/insufficient-storage-035590/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-035590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-035590
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-035590: (1.933906518s)
--- PASS: TestInsufficientStorage (13.40s)
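
Exit status 26 above is minikube's RSRC_DOCKER_STORAGE error; the test provokes it with the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE overrides visible in the JSON events, which make the storage check believe /var is full. A sketch of reproducing the failure the same way, with a hypothetical profile name storage-demo (the exact semantics of the test-only variables are an assumption based on this run):

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	    out/minikube-linux-arm64 start -p storage-demo --memory=2048 \
	    --output=json --driver=docker --container-runtime=crio
	echo $?    # expect 26
	# Per the error's own advice: 'docker system prune' frees real space,
	# and '--force' on the start command skips the check entirely.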

                                                
                                    
x
+
TestKubernetesUpgrade (389.96s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-987814 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-987814 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.831205479s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-987814
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-987814: (1.419659426s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-987814 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-987814 status --format={{.Host}}: exit status 7 (121.085714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-987814 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-987814 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m45.614411132s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-987814 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-987814 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-987814 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (113.427535ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-987814] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-987814
	    minikube start -p kubernetes-upgrade-987814 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9878142 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.2, by running:
	    
	    minikube start -p kubernetes-upgrade-987814 --kubernetes-version=v1.27.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-987814 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-987814 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.082344168s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-987814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-987814
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-987814: (2.5847181s)
--- PASS: TestKubernetesUpgrade (389.96s)
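
The flow above is the supported upgrade path: start on an old version, stop, start again on a newer one. The in-place downgrade attempt is refused at validation time with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), and the error itself lists the escape hatches. A condensed sketch with a hypothetical profile name upgrade-demo:

	minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p upgrade-demo
	minikube start -p upgrade-demo --kubernetes-version=v1.27.2 --driver=docker --container-runtime=crio   # upgrade: allowed
	minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio   # downgrade: exit 106
	# A genuine downgrade needs a rebuild, per the suggestion in the error output:
	minikube delete -p upgrade-demo
	minikube start -p upgrade-demo --kubernetes-version=v1.16.0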

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063572 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-063572 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (73.727554ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-063572] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
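
Exit status 14 (MK_USAGE) confirms that --no-kubernetes and --kubernetes-version are mutually exclusive. When the version comes from a global config value rather than a flag, the unset command quoted in the error clears it; the profile name below is hypothetical:

	minikube config unset kubernetes-version    # drop a globally pinned version
	minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio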

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (41.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063572 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063572 --driver=docker  --container-runtime=crio: (41.504565014s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-063572 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (6.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063572 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063572 --no-kubernetes --driver=docker  --container-runtime=crio: (4.488408312s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-063572 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-063572 status -o json: exit status 2 (338.327916ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-063572","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-063572
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-063572: (2.053762654s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.88s)
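
Note the exit status 2 here: minikube status exits non-zero whenever any component is not Running, even though the host itself is up and the JSON is still printed. A sketch of reading the fields regardless of the exit code (assumes jq is available; the profile name is hypothetical):

	out/minikube-linux-arm64 -p nok8s-demo status -o json | jq -r '.Host, .Kubelet'
	# Prints "Running" then "Stopped" for a host started with --no-kubernetes;
	# exit status 2 is the expected, informative result in that state, not an error.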

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063572 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063572 --no-kubernetes --driver=docker  --container-runtime=crio: (8.633909389s)
--- PASS: TestNoKubernetes/serial/Start (8.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-063572 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-063572 "sudo systemctl is-active --quiet service kubelet": exit status 1 (296.619966ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
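
The probe works because systemctl is-active exits 3 for an inactive unit, and minikube ssh surfaces the remote failure as its own non-zero exit status. A sketch of the same kubelet check, profile name hypothetical:

	if out/minikube-linux-arm64 ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"; then
	    echo "kubelet is active"
	else
	    echo "kubelet is not running (expected with --no-kubernetes)"
	fi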

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-063572
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-063572: (1.250524803s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063572 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063572 --driver=docker  --container-runtime=crio: (7.703271545s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.70s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-063572 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-063572 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.358738ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.33s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-266335
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

                                                
                                    
x
+
TestPause/serial/Start (49.98s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-845789 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0605 18:13:47.236097  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-845789 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.9799709s)
--- PASS: TestPause/serial/Start (49.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:230: (dbg) Run:  out/minikube-linux-arm64 start -p false-703503 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-703503 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (313.042036ms)

                                                
                                                
-- stdout --
	* [false-703503] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0605 18:15:24.384363  542952 out.go:296] Setting OutFile to fd 1 ...
	I0605 18:15:24.384547  542952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:15:24.384554  542952 out.go:309] Setting ErrFile to fd 2...
	I0605 18:15:24.384560  542952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0605 18:15:24.384768  542952 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16634-402421/.minikube/bin
	I0605 18:15:24.385209  542952 out.go:303] Setting JSON to false
	I0605 18:15:24.386383  542952 start.go:127] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10657,"bootTime":1685978268,"procs":422,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0605 18:15:24.386451  542952 start.go:137] virtualization:  
	I0605 18:15:24.391688  542952 out.go:177] * [false-703503] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0605 18:15:24.393892  542952 out.go:177]   - MINIKUBE_LOCATION=16634
	I0605 18:15:24.393952  542952 notify.go:220] Checking for updates...
	I0605 18:15:24.396177  542952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0605 18:15:24.399420  542952 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16634-402421/kubeconfig
	I0605 18:15:24.401539  542952 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16634-402421/.minikube
	I0605 18:15:24.404006  542952 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0605 18:15:24.406625  542952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0605 18:15:24.409033  542952 config.go:182] Loaded profile config "force-systemd-flag-300073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0605 18:15:24.409165  542952 driver.go:375] Setting default libvirt URI to qemu:///system
	I0605 18:15:24.475225  542952 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0605 18:15:24.475388  542952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0605 18:15:24.600080  542952 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-06-05 18:15:24.585738327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215166976 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1]] Warnings:<nil>}}
	I0605 18:15:24.600188  542952 docker.go:294] overlay module found
	I0605 18:15:24.602762  542952 out.go:177] * Using the docker driver based on user configuration
	I0605 18:15:24.604925  542952 start.go:297] selected driver: docker
	I0605 18:15:24.604956  542952 start.go:875] validating driver "docker" against <nil>
	I0605 18:15:24.604971  542952 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0605 18:15:24.607631  542952 out.go:177] 
	W0605 18:15:24.610169  542952 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0605 18:15:24.613054  542952 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:86: 
----------------------- debugLogs start: false-703503 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-703503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-703503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-703503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-703503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-703503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-703503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-703503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-703503" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-703503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-703503" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-703503" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-703503

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-703503"

                                                
                                                
----------------------- debugLogs end: false-703503 [took: 5.28880213s] --------------------------------
helpers_test.go:175: Cleaning up "false-703503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-703503
--- PASS: TestNetworkPlugins/group/false (5.82s)
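
The quick rejection above (exit status 14, MK_USAGE) records a hard constraint: the crio runtime always needs a CNI plugin, so --cni=false is refused during flag validation, before any container is created; the debugLogs probes then all fail simply because no profile was ever built. A sketch of the failing call and a valid alternative, profile name hypothetical:

	# Rejected: "The crio container runtime requires CNI".
	minikube start -p cni-demo --cni=false --driver=docker --container-runtime=crio
	# Accepted: choose a concrete CNI instead (bridge shown here; auto also works).
	minikube start -p cni-demo --cni=bridge --driver=docker --container-runtime=crio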

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (122.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-162380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0605 18:18:47.236285  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-162380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m2.162531646s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (122.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-162380 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3cca798b-6620-4818-9088-ef3e8bffcdfa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3cca798b-6620-4818-9088-ef3e8bffcdfa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.036485285s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-162380 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.58s)
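
The deploy step creates a busybox pod from the repo's testdata manifest, waits for the integration-test=busybox label to become healthy, then execs into the pod. An equivalent hand-run sketch using kubectl wait in place of the harness's poll loop (context name taken from this run; the manifest path assumes the minikube test tree):

	kubectl --context old-k8s-version-162380 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-162380 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-162380 exec busybox -- /bin/sh -c "ulimit -n"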

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-162380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-162380 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-162380 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-162380 --alsologtostderr -v=3: (12.158788857s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162380 -n old-k8s-version-162380
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162380 -n old-k8s-version-162380: exit status 7 (74.996104ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-162380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (432.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-162380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-162380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m12.467763104s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162380 -n old-k8s-version-162380
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (432.87s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (70.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-836670 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0605 18:19:50.698200  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 18:19:52.210084  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-836670 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m10.433559195s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.43s)
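
The --preload=false flag disables minikube's preloaded image tarball, so every Kubernetes image is pulled individually during the first start, which typically makes it slower than a preloaded one. A minimal sketch with a hypothetical profile name:

	minikube start -p no-preload-demo --memory=2200 --preload=false \
	    --driver=docker --container-runtime=crio --kubernetes-version=v1.27.2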

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-836670 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [61150396-ae86-4eed-83d3-97cc3b9fa716] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [61150396-ae86-4eed-83d3-97cc3b9fa716] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.030421426s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-836670 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-836670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-836670 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (12.16s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-836670 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-836670 --alsologtostderr -v=3: (12.162876459s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-836670 -n no-preload-836670
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-836670 -n no-preload-836670: exit status 7 (69.772423ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-836670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)
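
minikube status renders its report through a Go template and encodes cluster state in its exit code; for a stopped host it prints "Stopped" and exits 7, which the harness explicitly tolerates ("may be ok"). The same check by hand, with the output and exit code as recorded above:

  $ out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-836670 -n no-preload-836670
  Stopped
  $ echo $?
  7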

TestStartStop/group/no-preload/serial/SecondStart (628.10s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-836670 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0605 18:21:49.166265  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 18:22:53.750196  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 18:23:47.236026  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 18:24:50.697533  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-836670 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (10m27.696430716s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-836670 -n no-preload-836670
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (628.10s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6kpkj" [6030b4ed-2762-4955-a1bf-d73ff1a400a6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021861171s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6kpkj" [6030b4ed-2762-4955-a1bf-d73ff1a400a6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007993583s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-162380 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-162380 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)
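
VerifyKubernetesImages parses the JSON emitted by crictl and flags any image outside the expected minikube set. A rough manual equivalent, assuming jq is available on the host (jq is not part of the harness):

  $ out/minikube-linux-arm64 ssh -p old-k8s-version-162380 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'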

TestStartStop/group/old-k8s-version/serial/Pause (4.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-162380 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-162380 --alsologtostderr -v=1: (1.311127668s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162380 -n old-k8s-version-162380
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162380 -n old-k8s-version-162380: exit status 2 (450.686825ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-162380 -n old-k8s-version-162380
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-162380 -n old-k8s-version-162380: exit status 2 (447.219891ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-162380 --alsologtostderr -v=1
E0605 18:26:49.166099  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162380 -n old-k8s-version-162380
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-162380 -n old-k8s-version-162380
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.34s)
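
Pause drives a full pause/status/unpause cycle: after pausing, the apiserver reports "Paused" and the kubelet "Stopped", each with exit status 2, both tolerated by the test. The cycle, using the same commands as the log:

  $ out/minikube-linux-arm64 pause -p old-k8s-version-162380 --alsologtostderr -v=1
  $ out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162380 -n old-k8s-version-162380
  $ out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-162380 -n old-k8s-version-162380
  $ out/minikube-linux-arm64 unpause -p old-k8s-version-162380 --alsologtostderr -v=1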

TestStartStop/group/embed-certs/serial/FirstStart (82.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-277862 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-277862 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m22.891827475s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.89s)

TestStartStop/group/embed-certs/serial/DeployApp (10.57s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-277862 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ee73c82a-6e84-4660-903b-5354c8d82cd0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ee73c82a-6e84-4660-903b-5354c8d82cd0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.027711582s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-277862 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.57s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-277862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-277862 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/embed-certs/serial/Stop (12.27s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-277862 --alsologtostderr -v=3
E0605 18:28:30.282546  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-277862 --alsologtostderr -v=3: (12.267325907s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.27s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-277862 -n embed-certs-277862
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-277862 -n embed-certs-277862: exit status 7 (81.339514ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-277862 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (610.51s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-277862 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0605 18:28:47.236278  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 18:28:59.152742  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:28:59.158117  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:28:59.168849  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:28:59.189128  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:28:59.229354  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:28:59.309703  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:28:59.470047  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:28:59.791076  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:29:00.431755  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:29:01.712398  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:29:04.273423  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:29:09.393623  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:29:19.634630  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:29:40.114857  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:29:50.697391  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 18:30:21.075717  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:31:42.996679  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-277862 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (10m9.987029319s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-277862 -n embed-certs-277862
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (610.51s)
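
The E0605 cert_rotation lines interleaved above appear to be noise rather than failures: client-go's certificate-rotation watcher keeps trying to reload client certificates for profiles that earlier tests have already torn down, and the test passes regardless. The watched path simply no longer exists:

  $ ls /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt
  ls: cannot access '/home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt': No such file or directory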

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-zjg5n" [7f340660-28ed-42db-adb4-6fc215a3465d] Running
E0605 18:31:49.166286  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025256707s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-zjg5n" [7f340660-28ed-42db-adb4-6fc215a3465d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007887701s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-836670 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-836670 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/no-preload/serial/Pause (3.50s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-836670 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-836670 -n no-preload-836670
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-836670 -n no-preload-836670: exit status 2 (356.432793ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-836670 -n no-preload-836670
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-836670 -n no-preload-836670: exit status 2 (353.671894ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-836670 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-836670 -n no-preload-836670
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-836670 -n no-preload-836670
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.50s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-162919 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-162919 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m20.905831107s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.91s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-162919 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c64f934c-53f8-4eab-b8c6-2fdd9b6f4d39] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c64f934c-53f8-4eab-b8c6-2fdd9b6f4d39] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.029412798s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-162919 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.55s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-162919 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-162919 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.021062899s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-162919 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-162919 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-162919 --alsologtostderr -v=3: (12.150451011s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-162919 -n default-k8s-diff-port-162919
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-162919 -n default-k8s-diff-port-162919: exit status 7 (106.65401ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-162919 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (638.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-162919 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0605 18:33:47.235668  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 18:33:59.153137  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:34:26.837021  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:34:50.697696  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 18:35:53.970630  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:35:53.975977  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:35:53.986295  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:35:54.006818  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:35:54.047059  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:35:54.127470  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:35:54.287897  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:35:54.608517  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:35:55.249400  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:35:56.529568  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:35:59.089865  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:36:04.210642  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:36:14.450947  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:36:32.210702  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 18:36:34.931569  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:36:49.165636  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 18:37:15.892832  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:38:37.813205  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:38:47.235758  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-162919 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (10m38.232007977s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-162919 -n default-k8s-diff-port-162919
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (638.72s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-vjkmw" [b5bb8015-b839-4c5b-a1f2-0b5dfff92663] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024512841s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-vjkmw" [b5bb8015-b839-4c5b-a1f2-0b5dfff92663] Running
E0605 18:38:59.152430  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007874269s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-277862 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-277862 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/embed-certs/serial/Pause (3.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-277862 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-277862 -n embed-certs-277862
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-277862 -n embed-certs-277862: exit status 2 (338.487966ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-277862 -n embed-certs-277862
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-277862 -n embed-certs-277862: exit status 2 (342.452892ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-277862 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-277862 -n embed-certs-277862
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-277862 -n embed-certs-277862
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.83s)

TestStartStop/group/newest-cni/serial/FirstStart (57.67s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-264617 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0605 18:39:33.751088  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 18:39:50.697723  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-264617 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (57.66717198s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.67s)
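
The newest-cni profile deliberately comes up with a bare CNI configuration (--network-plugin=cni plus a custom pod CIDR) and no CNI plugin deployed, which is why DeployApp and the *AfterStop checks below are skipped with "cni mode requires additional setup before pods can schedule". The flags that matter, extracted from the full command above:

  $ out/minikube-linux-arm64 start -p newest-cni-264617 \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --wait=apiserver,system_pods,default_sa \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.27.2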

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-264617 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-264617 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.020994941s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/newest-cni/serial/Stop (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-264617 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-264617 --alsologtostderr -v=3: (1.271265482s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-264617 -n newest-cni-264617
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-264617 -n newest-cni-264617: exit status 7 (80.305662ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-264617 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (30.87s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-264617 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-264617 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (30.41043765s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-264617 -n newest-cni-264617
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.87s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-264617 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (3.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-264617 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-264617 -n newest-cni-264617
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-264617 -n newest-cni-264617: exit status 2 (361.5497ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-264617 -n newest-cni-264617
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-264617 -n newest-cni-264617: exit status 2 (366.969541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-264617 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-264617 -n newest-cni-264617
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-264617 -n newest-cni-264617
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.24s)

TestNetworkPlugins/group/auto/Start (77.36s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p auto-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0605 18:40:53.970855  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:41:21.653819  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
E0605 18:41:49.165929  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p auto-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m17.360074171s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.36s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-703503 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-703503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-mfhk2" [50f57ecf-8400-4493-827b-769e4e2efaad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-mfhk2" [50f57ecf-8400-4493-827b-769e4e2efaad] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.008206686s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.37s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-703503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)
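
Localhost and HairPin exercise two different return paths from the same netcat pod: Localhost dials the pod's own loopback, while HairPin dials the pod's own Service name, so the connection leaves the pod and must be NATed back into it (hairpin traffic). Roughly, the two probes above amount to:

  # direct loopback
  $ kubectl --context auto-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # via the pod's own Service name (hairpin)
  $ kubectl --context auto-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"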

TestNetworkPlugins/group/kindnet/Start (82.10s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0605 18:43:47.236149  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m22.102272263s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.10s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7tbk6" [ef98814b-d959-4323-85c2-02829797c0b7] Running
E0605 18:43:59.152868  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.032667962s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
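
ControllerPod only confirms the kindnet daemon pod is healthy before the network is exercised. An equivalent manual check, with kubectl wait in place of the test's polling (the app=kindnet label is the one shown in the log):

  $ kubectl --context kindnet-703503 -n kube-system get pods -l app=kindnet
  $ kubectl --context kindnet-703503 -n kube-system wait --for=condition=Ready \
      pod -l app=kindnet --timeout=600s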

TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-703503 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-703503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-5j4fg" [6e9c8dc7-2583-4f2b-923f-bab7dc216375] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-5j4fg" [6e9c8dc7-2583-4f2b-923f-bab7dc216375] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00795329s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.41s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-703503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)
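
The DNS, Localhost, and HairPin probes above exercise three different paths from the same pod: cluster DNS resolution, the pod's own loopback, and hairpin NAT, where the pod dials its own Service name (netcat) and the CNI must route the connection back to the pod itself. If hairpin mode is broken in a CNI, the "nc -w 5 -i 5 -z netcat 8080" probe is the one that times out while the other two still pass. To inspect the Service the hairpin probe resolves (the Service name is an assumption inferred from the nc target in the log):

  kubectl --context kindnet-703503 get svc netcat -o wide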

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-qvq77" [427f28c9-72e2-43d6-a840-e3e2d405ff1f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025738687s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-qvq77" [427f28c9-72e2-43d6-a840-e3e2d405ff1f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010006729s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-162919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-162919 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)
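
VerifyKubernetesImages lists CRI-O's image store and flags anything outside the expected minikube set, which is why the two images above are called out as "non-minikube". A hand-run sketch of the same audit (jq is an assumption; any JSON filter works on crictl's {"images": [...]} output):

  minikube ssh -p default-k8s-diff-port-162919 "sudo crictl images -o json" \
    | jq -r '.images[].repoTags[]'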

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-162919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-162919 --alsologtostderr -v=1: (1.07774383s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-162919 -n default-k8s-diff-port-162919
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-162919 -n default-k8s-diff-port-162919: exit status 2 (440.803082ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-162919 -n default-k8s-diff-port-162919
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-162919 -n default-k8s-diff-port-162919: exit status 2 (411.858891ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-162919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-162919 --alsologtostderr -v=1: (1.172323304s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-162919 -n default-k8s-diff-port-162919
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-162919 -n default-k8s-diff-port-162919
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.60s)
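
The Pause test drives a full pause/unpause cycle. While the cluster is paused, minikube status reports the API server as Paused and the kubelet as Stopped, and exits with status 2, which the harness explicitly tolerates ("may be ok" above). The same cycle by hand:

  minikube pause -p default-k8s-diff-port-162919
  minikube status --format={{.APIServer}} -p default-k8s-diff-port-162919   # Paused, exit status 2
  minikube status --format={{.Kubelet}} -p default-k8s-diff-port-162919     # Stopped, exit status 2
  minikube unpause -p default-k8s-diff-port-162919
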
E0605 18:48:58.983519  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:48:58.988766  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:48:58.999011  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:48:59.019361  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:48:59.059595  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:48:59.139871  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:48:59.153275  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:48:59.300130  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:48:59.620618  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:49:00.261182  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:49:01.542125  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:49:04.102561  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:49:04.577170  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
E0605 18:49:09.223432  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:49:19.464515  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:49:39.944836  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/kindnet-703503/client.crt: no such file or directory
E0605 18:49:45.537995  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
E0605 18:49:48.444361  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:49:50.700792  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
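
The interleaved cert_rotation errors above (and the similar runs elsewhere in this report) appear to come from client-go's certificate-reload watcher inside the long-lived test process: it still holds paths to client.crt files of profiles (kindnet-703503, old-k8s-version-162380, auto-703503, and others) whose directories earlier tests have already cleaned up. They read as noise from the shared test binary rather than failures of the tests in progress.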

TestNetworkPlugins/group/calico/Start (85.12s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p calico-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p calico-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m25.120293797s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.12s)

TestNetworkPlugins/group/custom-flannel/Start (70.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0605 18:44:50.697678  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/ingress-addon-legacy-980425/client.crt: no such file or directory
E0605 18:45:10.282746  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
E0605 18:45:22.197966  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/old-k8s-version-162380/client.crt: no such file or directory
E0605 18:45:53.971396  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/no-preload-836670/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m10.312522461s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.31s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-703503 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-703503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-zsdmz" [cab02c54-8da0-4ca2-8f26-e69e3b6c65ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-zsdmz" [cab02c54-8da0-4ca2-8f26-e69e3b6c65ca] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.00803134s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.38s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vcsgz" [2b9849b9-79f4-4fe0-bf90-5cd1de3d2ad4] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.038534198s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-703503 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-703503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-d9k7k" [268552cd-dc63-407d-a1e2-bb001af44860] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-d9k7k" [268552cd-dc63-407d-a1e2-bb001af44860] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.008956342s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.44s)

TestNetworkPlugins/group/custom-flannel/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-703503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.35s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.30s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.29s)

TestNetworkPlugins/group/calico/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-703503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.33s)

TestNetworkPlugins/group/calico/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.33s)

TestNetworkPlugins/group/calico/HairPin (0.41s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.41s)

TestNetworkPlugins/group/enable-default-cni/Start (95.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m35.173190426s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (95.17s)

TestNetworkPlugins/group/flannel/Start (70.11s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0605 18:46:49.166111  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/functional-083977/client.crt: no such file or directory
E0605 18:47:04.601740  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:04.607044  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:04.617218  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:04.637409  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:04.677615  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:04.757859  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:04.918184  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:05.239192  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:05.879472  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:07.160422  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:09.720596  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:14.841562  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:25.082486  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
E0605 18:47:45.563165  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/auto-703503/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p flannel-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m10.109186815s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.11s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vpr8w" [563f3112-8694-40e0-8fb0-1b9d3183d5a8] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.026240488s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-703503 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-703503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-vm9kg" [220004d6-5784-49c3-90bd-670b253730fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-vm9kg" [220004d6-5784-49c3-90bd-670b253730fc] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.009019597s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.41s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-703503 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-703503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hml7w" [a3679283-9c60-45b8-b460-f896f868921a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-hml7w" [a3679283-9c60-45b8-b460-f896f868921a] Running
E0605 18:48:23.613692  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
E0605 18:48:23.620336  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
E0605 18:48:23.630601  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
E0605 18:48:23.651087  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
E0605 18:48:23.691332  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
E0605 18:48:23.772229  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
E0605 18:48:23.933329  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
E0605 18:48:24.253797  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.011252942s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.40s)

TestNetworkPlugins/group/flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-703503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-703503 exec deployment/netcat -- nslookup kubernetes.default
E0605 18:48:24.894060  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.30s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.28s)

TestNetworkPlugins/group/bridge/Start (87.34s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0605 18:48:44.096450  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/default-k8s-diff-port-162919/client.crt: no such file or directory
E0605 18:48:47.237737  407813 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/addons-735995/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-arm64 start -p bridge-703503 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m27.344627052s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.34s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-703503 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-703503 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-vxgjp" [26dfc1cf-0231-4de0-9f7b-c32abf886b56] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-vxgjp" [26dfc1cf-0231-4de0-9f7b-c32abf886b56] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.007746289s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-703503 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-703503 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (28/296)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnly/v1.27.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.2/kubectl (0.00s)

TestDownloadOnlyKic (0.55s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-501309 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-501309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-501309
--- SKIP: TestDownloadOnlyKic (0.55s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1782: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:458: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-902944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-902944
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (5.51s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:92: Skipping the test as crio container runtimes requires CNI
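
kubenet is kubelet's legacy network plugin rather than a CNI plugin, and CRI-O only wires pod networking through CNI, so the suite skips this combination outright. Because no kubenet-703503 cluster is ever created, every probe in the debugLogs dump below fails with "context was not found" / "Profile not found"; that is expected for a skipped profile, and the group is still marked [pass: true].
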
panic.go:522: 
----------------------- debugLogs start: kubenet-703503 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-703503

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-703503

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-703503

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-703503

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-703503

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-703503

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-703503

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-703503

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-703503

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-703503

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-703503

>>> host: crictl pods:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: crictl containers:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> k8s: describe netcat deployment:
error: context "kubenet-703503" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-703503" does not exist

>>> k8s: netcat logs:
error: context "kubenet-703503" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-703503" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-703503" does not exist

>>> k8s: coredns logs:
error: context "kubenet-703503" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-703503" does not exist

>>> k8s: api server logs:
error: context "kubenet-703503" does not exist

>>> host: /etc/cni:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: ip a s:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: ip r s:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: iptables-save:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: iptables table nat:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-703503" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-703503" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-703503" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: kubelet daemon config:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> k8s: kubelet logs:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-703503

>>> host: docker daemon status:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: docker daemon config:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: docker system info:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: cri-docker daemon status:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: cri-docker daemon config:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: cri-dockerd version:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: containerd daemon status:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: containerd daemon config:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: containerd config dump:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: crio daemon status:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: crio daemon config:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: /etc/crio:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

>>> host: crio config:
* Profile "kubenet-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-703503"

----------------------- debugLogs end: kubenet-703503 [took: 5.208342585s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-703503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-703503
--- SKIP: TestNetworkPlugins/group/kubenet (5.51s)
x
+
TestNetworkPlugins/group/cilium (5.77s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-703503 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-703503

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-703503

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-703503

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-703503

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-703503

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-703503

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-703503

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-703503

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-703503

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-703503

>>> host: /etc/nsswitch.conf:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: /etc/hosts:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: /etc/resolv.conf:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-703503

>>> host: crictl pods:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: crictl containers:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> k8s: describe netcat deployment:
error: context "cilium-703503" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-703503" does not exist

>>> k8s: netcat logs:
error: context "cilium-703503" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-703503" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-703503" does not exist

>>> k8s: coredns logs:
error: context "cilium-703503" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-703503" does not exist

>>> k8s: api server logs:
error: context "cilium-703503" does not exist

>>> host: /etc/cni:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: ip a s:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: ip r s:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: iptables-save:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: iptables table nat:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-703503

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-703503

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-703503" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-703503" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-703503

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-703503

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-703503" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-703503" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-703503" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-703503" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-703503" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: kubelet daemon config:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> k8s: kubelet logs:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16634-402421/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 05 Jun 2023 18:15:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-300073
contexts:
- context:
    cluster: force-systemd-flag-300073
    extensions:
    - extension:
        last-update: Mon, 05 Jun 2023 18:15:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: force-systemd-flag-300073
  name: force-systemd-flag-300073
current-context: force-systemd-flag-300073
kind: Config
preferences: {}
users:
- name: force-systemd-flag-300073
  user:
    client-certificate: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/force-systemd-flag-300073/client.crt
    client-key: /home/jenkins/minikube-integration/16634-402421/.minikube/profiles/force-systemd-flag-300073/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-703503

>>> host: docker daemon status:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: docker daemon config:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: docker system info:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: cri-docker daemon status:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: cri-docker daemon config:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: cri-dockerd version:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: containerd daemon status:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: containerd daemon config:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: containerd config dump:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: crio daemon status:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: crio daemon config:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: /etc/crio:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

>>> host: crio config:
* Profile "cilium-703503" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-703503"

----------------------- debugLogs end: cilium-703503 [took: 5.513621395s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-703503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-703503
--- SKIP: TestNetworkPlugins/group/cilium (5.77s)