Test Report: Docker_Linux_crio_arm64 11942

7a38b95f8e8296ab5337ba84b22cfa25f776c266:2021-07-09

Failed tests (16/256)

TestAddons/parallel/Registry (177.41s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: registry stabilized in 27.14118ms
addons_test.go:299: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:340: "registry-pzwnr" [dd5ee812-d26b-4dfa-a00d-7cc3e2a97c4a] Running
addons_test.go:299: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013473216s
addons_test.go:302: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:340: "registry-proxy-fbwfb" [040628b4-50ba-4169-a8d6-b9804b46e10c] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
addons_test.go:302: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006462426s
addons_test.go:307: (dbg) Run:  kubectl --context addons-20210708230204-257783 delete po -l run=registry-test --now
addons_test.go:312: (dbg) Run:  kubectl --context addons-20210708230204-257783 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:312: (dbg) Done: kubectl --context addons-20210708230204-257783 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.821148969s)
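
For context, the busybox probe above is roughly equivalent to an HTTP HEAD request against the cluster-internal DNS name (wget --spider fetches headers without downloading a body). A minimal Go sketch of the same check; note the name only resolves via cluster DNS, which is why the test wraps it in a one-off pod:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Roughly what `wget --spider -S` does: request headers, discard any body.
	// registry.kube-system.svc.cluster.local only resolves inside the cluster,
	// so this must run in a pod (the test uses a throwaway busybox pod).
	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status)
}
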
addons_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 ip
2021/07/08 23:05:17 [DEBUG] GET http://192.168.49.2:5000
2021/07/08 23:05:17 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:17 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/07/08 23:05:18 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:18 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/07/08 23:05:20 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:20 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/07/08 23:05:24 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:24 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/07/08 23:05:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:33 [DEBUG] GET http://192.168.49.2:5000
2021/07/08 23:05:33 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:33 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/07/08 23:05:34 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:34 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/07/08 23:05:36 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:36 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/07/08 23:05:40 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:40 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/07/08 23:05:48 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:49 [DEBUG] GET http://192.168.49.2:5000
2021/07/08 23:05:49 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:49 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/07/08 23:05:50 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:50 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/07/08 23:05:52 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:52 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/07/08 23:05:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:05:56 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/07/08 23:06:04 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:05 [DEBUG] GET http://192.168.49.2:5000
2021/07/08 23:06:05 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:05 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/07/08 23:06:06 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:06 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/07/08 23:06:08 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:08 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/07/08 23:06:12 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:12 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/07/08 23:06:20 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:22 [DEBUG] GET http://192.168.49.2:5000
2021/07/08 23:06:22 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:22 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/07/08 23:06:23 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:23 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/07/08 23:06:25 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:25 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/07/08 23:06:29 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:29 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/07/08 23:06:37 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:41 [DEBUG] GET http://192.168.49.2:5000
2021/07/08 23:06:41 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:41 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/07/08 23:06:42 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:42 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/07/08 23:06:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:44 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/07/08 23:06:48 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:48 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/07/08 23:06:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:59 [DEBUG] GET http://192.168.49.2:5000
2021/07/08 23:06:59 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:06:59 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/07/08 23:07:00 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:00 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/07/08 23:07:02 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:02 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/07/08 23:07:06 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:06 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/07/08 23:07:14 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:18 [DEBUG] GET http://192.168.49.2:5000
2021/07/08 23:07:18 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:18 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/07/08 23:07:19 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:19 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/07/08 23:07:21 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:21 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/07/08 23:07:25 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:25 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/07/08 23:07:33 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:44 [DEBUG] GET http://192.168.49.2:5000
2021/07/08 23:07:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:44 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/07/08 23:07:45 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:45 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/07/08 23:07:47 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:47 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/07/08 23:07:51 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/07/08 23:07:51 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/07/08 23:07:59 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:352: failed to check external access to http://192.168.49.2:5000: GET http://192.168.49.2:5000 giving up after 5 attempt(s): Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 addons disable registry --alsologtostderr -v=1
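
The ERR/DEBUG lines above match go-retryablehttp's default logging: each pass makes five attempts with a doubling 1s/2s/4s/8s backoff before addons_test.go reports "giving up after 5 attempt(s)", and the whole pass is then re-run (the repeated cycles in the log). A minimal sketch of one such pass, assuming a go-retryablehttp client with default backoff; the test's actual helper may be wired differently:

package main

import (
	"fmt"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// checkExternalAccess makes up to 5 attempts with 1s/2s/4s/8s waits between
// them, mirroring the cadence visible in the log above.
func checkExternalAccess(url string) error {
	client := retryablehttp.NewClient()
	client.RetryMax = 4 // initial attempt + 4 retries = "5 attempt(s)"
	client.RetryWaitMin = 1 * time.Second
	client.RetryWaitMax = 8 * time.Second
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("failed to check external access to %s: %w", url, err)
	}
	return resp.Body.Close()
}

func main() {
	if err := checkExternalAccess("http://192.168.49.2:5000"); err != nil {
		fmt.Println(err)
	}
}
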
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210708230204-257783
helpers_test.go:236: (dbg) docker inspect addons-20210708230204-257783:
-- stdout --
	[
	    {
	        "Id": "077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33",
	        "Created": "2021-07-08T23:02:08.861515915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-07-08T23:02:09.476547454Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/hostname",
	        "HostsPath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/hosts",
	        "LogPath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33-json.log",
	        "Name": "/addons-20210708230204-257783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210708230204-257783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210708230204-257783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa317
1af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/d
ocker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41
f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210708230204-257783",
	                "Source": "/var/lib/docker/volumes/addons-20210708230204-257783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210708230204-257783",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210708230204-257783",
	                "name.minikube.sigs.k8s.io": "addons-20210708230204-257783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e2a6a80abfdd90c8450743b36a071af48d5dfe35af3935906d8f359ff63e391d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49502"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49501"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49498"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49499"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e2a6a80abfdd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210708230204-257783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "077ecedfa7d5",
	                        "addons-20210708230204-257783"
	                    ],
	                    "NetworkID": "1f94ce698172ccdc730b6d5814ec69a10719715b26179dea78e95db77a131746",
	                    "EndpointID": "222e425dd05f5203c7200057bfd152a381a3ac67c46e9ae1bda0a98569a14d86",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
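
Worth noting in the inspect output: every container port, 5000/tcp included, is published on 127.0.0.1 with an ephemeral host port (49500 here), while the failed probe dialed 192.168.49.2:5000 directly. A hypothetical Go helper that extracts the published port using the same inspect template the minikube logs below apply to 22/tcp:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor shells out to `docker container inspect -f` with the template
// {{(index (index .NetworkSettings.Ports "<port>") 0).HostPort}}.
func hostPortFor(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortFor("addons-20210708230204-257783", "5000/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Per the dump above this prints 49500: the registry port as published
	// to the host loopback, not on the container IP 192.168.49.2.
	fmt.Printf("5000/tcp published at 127.0.0.1:%s\n", port)
}
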
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210708230204-257783 -n addons-20210708230204-257783
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 logs -n 25
helpers_test.go:253: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                  |                Profile                |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                 | download-only-20210708230110-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:01:49 UTC | Thu, 08 Jul 2021 23:01:49 UTC |
	| delete  | -p                                    | download-only-20210708230110-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:01:49 UTC | Thu, 08 Jul 2021 23:01:49 UTC |
	|         | download-only-20210708230110-257783   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-only-20210708230110-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:01:49 UTC | Thu, 08 Jul 2021 23:01:49 UTC |
	|         | download-only-20210708230110-257783   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-docker-20210708230149-257783 | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:02:04 UTC | Thu, 08 Jul 2021 23:02:04 UTC |
	|         | download-docker-20210708230149-257783 |                                       |         |         |                               |                               |
	| start   | -p                                    | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:02:04 UTC | Thu, 08 Jul 2021 23:05:04 UTC |
	|         | addons-20210708230204-257783          |                                       |         |         |                               |                               |
	|         | --wait=true --memory=4000             |                                       |         |         |                               |                               |
	|         | --alsologtostderr                     |                                       |         |         |                               |                               |
	|         | --addons=registry                     |                                       |         |         |                               |                               |
	|         | --addons=metrics-server               |                                       |         |         |                               |                               |
	|         | --addons=olm                          |                                       |         |         |                               |                               |
	|         | --addons=volumesnapshots              |                                       |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver          |                                       |         |         |                               |                               |
	|         | --driver=docker                       |                                       |         |         |                               |                               |
	|         | --container-runtime=crio              |                                       |         |         |                               |                               |
	|         | --addons=ingress                      |                                       |         |         |                               |                               |
	|         | --addons=gcp-auth                     |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:05:17 UTC | Thu, 08 Jul 2021 23:05:17 UTC |
	|         | ip                                    |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:07:59 UTC | Thu, 08 Jul 2021 23:08:00 UTC |
	|         | addons disable registry               |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/07/08 23:02:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 23:02:04.595093  258367 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:02:04.595210  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:02:04.595232  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:02:04.595242  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:02:04.595370  258367 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:02:04.595646  258367 out.go:293] Setting JSON to false
	I0708 23:02:04.596449  258367 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6273,"bootTime":1625779051,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:02:04.596515  258367 start.go:121] virtualization:  
	I0708 23:02:04.599175  258367 out.go:165] * [addons-20210708230204-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0708 23:02:04.601946  258367 out.go:165]   - MINIKUBE_LOCATION=11942
	I0708 23:02:04.600696  258367 notify.go:169] Checking for updates...
	I0708 23:02:04.604376  258367 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:02:04.606615  258367 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0708 23:02:04.609018  258367 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0708 23:02:04.609162  258367 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:02:04.654113  258367 docker.go:132] docker version: linux-20.10.7
	I0708 23:02:04.654191  258367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:02:04.757973  258367 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:02:04.699594566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLic
ense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:02:04.758088  258367 docker.go:244] overlay module found
	I0708 23:02:04.760790  258367 out.go:165] * Using the docker driver based on user configuration
	I0708 23:02:04.760807  258367 start.go:278] selected driver: docker
	I0708 23:02:04.760812  258367 start.go:751] validating driver "docker" against <nil>
	I0708 23:02:04.760826  258367 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0708 23:02:04.760863  258367 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0708 23:02:04.760877  258367 out.go:230] ! Your cgroup does not allow setting memory.
	I0708 23:02:04.763505  258367 out.go:165]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0708 23:02:04.763788  258367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:02:04.843896  258367 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:02:04.79378416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLice
nse: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:02:04.844011  258367 start_flags.go:261] no existing cluster config was found, will generate one from the flags 
	I0708 23:02:04.844165  258367 start_flags.go:687] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 23:02:04.844188  258367 cni.go:93] Creating CNI manager for ""
	I0708 23:02:04.844194  258367 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:02:04.844202  258367 start_flags.go:270] Found "CNI" CNI - setting NetworkPlugin=cni
	I0708 23:02:04.844213  258367 start_flags.go:275] config:
	{Name:addons-20210708230204-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Ne
tworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:02:04.846957  258367 out.go:165] * Starting control plane node addons-20210708230204-257783 in cluster addons-20210708230204-257783
	I0708 23:02:04.846989  258367 cache.go:117] Beginning downloading kic base image for docker with crio
	I0708 23:02:04.849261  258367 out.go:165] * Pulling base image ...
	I0708 23:02:04.849281  258367 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:02:04.849315  258367 preload.go:150] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4
	I0708 23:02:04.849325  258367 cache.go:56] Caching tarball of preloaded images
	I0708 23:02:04.849482  258367 preload.go:174] Found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0708 23:02:04.849503  258367 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.2 on crio
	I0708 23:02:04.849776  258367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/config.json ...
	I0708 23:02:04.849798  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/config.json: {Name:mk6e320fc3a23d8bae7a0dedef336e80220bbb8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:04.849933  258367 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0708 23:02:04.882996  258367 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0708 23:02:04.883027  258367 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0708 23:02:04.883046  258367 cache.go:205] Successfully downloaded all kic artifacts
	I0708 23:02:04.883069  258367 start.go:313] acquiring machines lock for addons-20210708230204-257783: {Name:mk70de6724665814088ca786aa95a9c4f42a89ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 23:02:04.883169  258367 start.go:317] acquired machines lock for "addons-20210708230204-257783" in 87.3µs
	I0708 23:02:04.883190  258367 start.go:89] Provisioning new machine with config: &{Name:addons-20210708230204-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0708 23:02:04.883252  258367 start.go:126] createHost starting for "" (driver="docker")
	I0708 23:02:04.885753  258367 out.go:192] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0708 23:02:04.885967  258367 start.go:160] libmachine.API.Create for "addons-20210708230204-257783" (driver="docker")
	I0708 23:02:04.885995  258367 client.go:168] LocalClient.Create starting
	I0708 23:02:04.886069  258367 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem
	I0708 23:02:05.051741  258367 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem
	I0708 23:02:05.311405  258367 cli_runner.go:115] Run: docker network inspect addons-20210708230204-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0708 23:02:05.343632  258367 cli_runner.go:162] docker network inspect addons-20210708230204-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0708 23:02:05.343702  258367 network_create.go:255] running [docker network inspect addons-20210708230204-257783] to gather additional debugging logs...
	I0708 23:02:05.343720  258367 cli_runner.go:115] Run: docker network inspect addons-20210708230204-257783
	W0708 23:02:05.374816  258367 cli_runner.go:162] docker network inspect addons-20210708230204-257783 returned with exit code 1
	I0708 23:02:05.374839  258367 network_create.go:258] error running [docker network inspect addons-20210708230204-257783]: docker network inspect addons-20210708230204-257783: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210708230204-257783
	I0708 23:02:05.374849  258367 network_create.go:260] output of [docker network inspect addons-20210708230204-257783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210708230204-257783
	
	** /stderr **
	I0708 23:02:05.374904  258367 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0708 23:02:05.406189  258367 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x400000eff8] misses:0}
	I0708 23:02:05.406225  258367 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0708 23:02:05.406242  258367 network_create.go:106] attempt to create docker network addons-20210708230204-257783 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0708 23:02:05.406286  258367 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210708230204-257783
	I0708 23:02:05.548723  258367 network_create.go:90] docker network addons-20210708230204-257783 192.168.49.0/24 created
	I0708 23:02:05.548749  258367 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210708230204-257783" container
	I0708 23:02:05.548819  258367 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0708 23:02:05.580386  258367 cli_runner.go:115] Run: docker volume create addons-20210708230204-257783 --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --label created_by.minikube.sigs.k8s.io=true
	I0708 23:02:05.612609  258367 oci.go:102] Successfully created a docker volume addons-20210708230204-257783
	I0708 23:02:05.612679  258367 cli_runner.go:115] Run: docker run --rm --name addons-20210708230204-257783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --entrypoint /usr/bin/test -v addons-20210708230204-257783:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0708 23:02:08.695369  258367 cli_runner.go:168] Completed: docker run --rm --name addons-20210708230204-257783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --entrypoint /usr/bin/test -v addons-20210708230204-257783:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (3.082656193s)
	I0708 23:02:08.695392  258367 oci.go:106] Successfully prepared a docker volume addons-20210708230204-257783
	W0708 23:02:08.695416  258367 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0708 23:02:08.695423  258367 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0708 23:02:08.695474  258367 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0708 23:02:08.695676  258367 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:02:08.695770  258367 kic.go:179] Starting extracting preloaded images to volume ...
	I0708 23:02:08.695817  258367 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210708230204-257783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0708 23:02:08.824606  258367 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210708230204-257783 --name addons-20210708230204-257783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210708230204-257783 --network addons-20210708230204-257783 --ip 192.168.49.2 --volume addons-20210708230204-257783:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0708 23:02:09.490697  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Running}}
	I0708 23:02:09.549362  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:02:09.602057  258367 cli_runner.go:115] Run: docker exec addons-20210708230204-257783 stat /var/lib/dpkg/alternatives/iptables
	I0708 23:02:09.698879  258367 oci.go:278] the created container "addons-20210708230204-257783" has a running status.
	I0708 23:02:09.698906  258367 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa...
	I0708 23:02:10.039045  258367 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0708 23:02:10.218918  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:02:10.268015  258367 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0708 23:02:10.268054  258367 kic_runner.go:115] Args: [docker exec --privileged addons-20210708230204-257783 chown docker:docker /home/docker/.ssh/authorized_keys]
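The key provisioning above amounts to three host-side steps: generate a key pair, drop the public half into the container, and chown it so the in-container docker user can authenticate over SSH. A hedged shell equivalent (container name illustrative):

    # Generate a key pair and install the public key in the container.
    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker cp ./id_rsa.pub mynode:/home/docker/.ssh/authorized_keys
    docker exec --privileged mynode \
      chown docker:docker /home/docker/.ssh/authorized_keys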
	I0708 23:02:19.993425  258367 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210708230204-257783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (11.297574532s)
	I0708 23:02:19.993449  258367 kic.go:188] duration metric: took 11.297752 seconds to extract preloaded images to volume
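The 11.3s extraction step bind-mounts the lz4 preload tarball read-only and untars it directly into the node volume, so the node boots with all Kubernetes images already in the CRI-O store. The same command with illustrative paths:

    # Untar the preloaded image cache straight into the node volume.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$HOME/.minikube/cache/preloaded-images.tar.lz4:/preloaded.tar:ro" \
      -v mynode:/extractDir \
      gcr.io/k8s-minikube/kicbase:v0.0.25 \
      -I lz4 -xf /preloaded.tar -C /extractDir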
	I0708 23:02:19.993522  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:02:20.033201  258367 machine.go:88] provisioning docker machine ...
	I0708 23:02:20.033239  258367 ubuntu.go:169] provisioning hostname "addons-20210708230204-257783"
	I0708 23:02:20.033296  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:20.073522  258367 main.go:130] libmachine: Using SSH client type: native
	I0708 23:02:20.073698  258367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49502 <nil> <nil>}
	I0708 23:02:20.073711  258367 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210708230204-257783 && echo "addons-20210708230204-257783" | sudo tee /etc/hostname
	I0708 23:02:20.195296  258367 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210708230204-257783
	
	I0708 23:02:20.195361  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
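Because the container publishes 22/tcp as `127.0.0.1::22`, Docker picks the host port (49502 here) at random; the inspect template above recovers it before each SSH call. A sketch with an illustrative container name:

    # Recover the host port mapped to the container's SSH port, then connect.
    SSH_PORT=$(docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' mynode)
    ssh -i ./id_rsa -p "$SSH_PORT" docker@127.0.0.1 hostname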
	I0708 23:02:20.239296  258367 main.go:130] libmachine: Using SSH client type: native
	I0708 23:02:20.239447  258367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49502 <nil> <nil>}
	I0708 23:02:20.239473  258367 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210708230204-257783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210708230204-257783/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210708230204-257783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 23:02:20.358452  258367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0708 23:02:20.358475  258367 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube}
	I0708 23:02:20.358493  258367 ubuntu.go:177] setting up certificates
	I0708 23:02:20.358501  258367 provision.go:83] configureAuth start
	I0708 23:02:20.358550  258367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210708230204-257783
	I0708 23:02:20.392386  258367 provision.go:137] copyHostCerts
	I0708 23:02:20.392450  258367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem (1078 bytes)
	I0708 23:02:20.392535  258367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem (1123 bytes)
	I0708 23:02:20.392595  258367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem (1679 bytes)
	I0708 23:02:20.392646  258367 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem org=jenkins.addons-20210708230204-257783 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210708230204-257783]
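minikube generates this server certificate in Go; a rough openssl equivalent reproducing the SAN set logged above (file names illustrative, not the actual implementation):

    # Issue a server cert signed by the minikube CA with the logged SANs.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.addons-20210708230204-257783"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-20210708230204-257783')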
	I0708 23:02:21.241180  258367 provision.go:171] copyRemoteCerts
	I0708 23:02:21.241232  258367 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 23:02:21.241271  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.274111  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.353415  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 23:02:21.367359  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0708 23:02:21.381206  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 23:02:21.395073  258367 provision.go:86] duration metric: configureAuth took 1.036561369s
	I0708 23:02:21.395089  258367 ubuntu.go:193] setting minikube options for container-runtime
	I0708 23:02:21.395329  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.428650  258367 main.go:130] libmachine: Using SSH client type: native
	I0708 23:02:21.428842  258367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49502 <nil> <nil>}
	I0708 23:02:21.428857  258367 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	I0708 23:02:21.545263  258367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 23:02:21.545312  258367 machine.go:91] provisioned docker machine in 1.512083207s
	I0708 23:02:21.545325  258367 client.go:171] LocalClient.Create took 16.659321464s
	I0708 23:02:21.545341  258367 start.go:168] duration metric: libmachine.API.Create for "addons-20210708230204-257783" took 16.659372186s
	I0708 23:02:21.545355  258367 start.go:267] post-start starting for "addons-20210708230204-257783" (driver="docker")
	I0708 23:02:21.545361  258367 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 23:02:21.545424  258367 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 23:02:21.545473  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.578362  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.657484  258367 ssh_runner.go:149] Run: cat /etc/os-release
	I0708 23:02:21.659635  258367 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0708 23:02:21.659659  258367 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0708 23:02:21.659671  258367 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0708 23:02:21.659680  258367 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0708 23:02:21.659688  258367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/addons for local assets ...
	I0708 23:02:21.659746  258367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/files for local assets ...
	I0708 23:02:21.659773  258367 start.go:270] post-start completed in 114.410955ms
	I0708 23:02:21.660043  258367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210708230204-257783
	I0708 23:02:21.693487  258367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/config.json ...
	I0708 23:02:21.693686  258367 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 23:02:21.693730  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.726619  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.803192  258367 start.go:129] duration metric: createHost completed in 16.919928817s
	I0708 23:02:21.803212  258367 start.go:80] releasing machines lock for "addons-20210708230204-257783", held for 16.920036259s
	I0708 23:02:21.803282  258367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210708230204-257783
	I0708 23:02:21.836270  258367 ssh_runner.go:149] Run: systemctl --version
	I0708 23:02:21.836314  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.836333  258367 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0708 23:02:21.836381  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.875785  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.876742  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.955373  258367 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0708 23:02:22.089407  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0708 23:02:22.097604  258367 docker.go:153] disabling docker service ...
	I0708 23:02:22.097648  258367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0708 23:02:22.106161  258367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0708 23:02:22.113999  258367 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0708 23:02:22.187402  258367 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0708 23:02:22.276539  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0708 23:02:22.284617  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 23:02:22.295874  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0708 23:02:22.302331  258367 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0708 23:02:22.302355  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0708 23:02:22.309020  258367 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 23:02:22.314349  258367 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 23:02:22.319464  258367 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0708 23:02:22.399957  258367 ssh_runner.go:149] Run: sudo systemctl start crio
	I0708 23:02:22.574141  258367 start.go:386] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 23:02:22.574208  258367 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0708 23:02:22.576998  258367 start.go:411] Will wait 60s for crictl version
	I0708 23:02:22.577041  258367 ssh_runner.go:149] Run: sudo crictl version
	I0708 23:02:22.602143  258367 start.go:420] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0708 23:02:22.602207  258367 ssh_runner.go:149] Run: crio --version
	I0708 23:02:22.668574  258367 ssh_runner.go:149] Run: crio --version
	I0708 23:02:22.736496  258367 out.go:165] * Preparing Kubernetes v1.21.2 on CRI-O 1.20.3 ...
	I0708 23:02:22.736572  258367 cli_runner.go:115] Run: docker network inspect addons-20210708230204-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0708 23:02:22.768937  258367 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0708 23:02:22.771623  258367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
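The one-liner above is an idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the fresh mapping, and copy the result back with sudo (a plain `>` redirect would run as the unprivileged SSH user and fail on the root-owned file). Expanded:

    # Rewrite /etc/hosts without duplicating the minikube gateway entry.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.49.1\thost.minikube.internal'
    } > "/tmp/hosts.$$" && sudo cp "/tmp/hosts.$$" /etc/hosts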
	I0708 23:02:22.779326  258367 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:02:22.779408  258367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:02:22.833839  258367 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:02:22.833860  258367 crio.go:333] Images already preloaded, skipping extraction
	I0708 23:02:22.833905  258367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:02:22.855250  258367 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:02:22.855269  258367 cache_images.go:74] Images are preloaded, skipping loading
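The preload check parses `crictl images --output json` and compares it against the expected image list; a quick manual equivalent (assuming jq is available on the node, which minikube itself does not need):

    # Count the images CRI-O already has, straight from the CRI API.
    sudo crictl images --output json | jq '.images | length'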
	I0708 23:02:22.855326  258367 ssh_runner.go:149] Run: crio config
	I0708 23:02:22.926289  258367 cni.go:93] Creating CNI manager for ""
	I0708 23:02:22.926310  258367 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:02:22.926319  258367 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0708 23:02:22.926333  258367 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210708230204-257783 NodeName:addons-20210708230204-257783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0708 23:02:22.926456  258367 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "addons-20210708230204-257783"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	
	I0708 23:02:22.926544  258367 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-20210708230204-257783 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0708 23:02:22.926598  258367 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0708 23:02:22.932481  258367 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 23:02:22.932526  258367 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 23:02:22.937866  258367 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0708 23:02:22.948342  258367 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 23:02:22.958723  258367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1885 bytes)
	I0708 23:02:22.968914  258367 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0708 23:02:22.971253  258367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 23:02:22.978510  258367 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783 for IP: 192.168.49.2
	I0708 23:02:22.978544  258367 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key
	I0708 23:02:23.190106  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt ...
	I0708 23:02:23.190133  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt: {Name:mk5906ee5301ffc572d7fce2bd29e40064ac492c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.190305  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key ...
	I0708 23:02:23.190322  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key: {Name:mkb3a034c656a399e8a3b1d9af8b8f2247a84d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.190411  258367 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key
	I0708 23:02:23.411680  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt ...
	I0708 23:02:23.411701  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt: {Name:mkfacfbb209518217be8fd06056f51a62e70f58a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.411817  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key ...
	I0708 23:02:23.411832  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key: {Name:mk60c06d0c1c23937aa87c9d7bc9822baf022041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.411935  258367 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.key
	I0708 23:02:23.411946  258367 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt with IP's: []
	I0708 23:02:23.821741  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt ...
	I0708 23:02:23.821759  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: {Name:mk73acb25b69f7dc2f7fe66039431368600627ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.821888  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.key ...
	I0708 23:02:23.821902  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.key: {Name:mk7826adaa22e4c26a84eba8050bc8619fdb79db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.821984  258367 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2
	I0708 23:02:23.821992  258367 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0708 23:02:24.106323  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2 ...
	I0708 23:02:24.106346  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2: {Name:mk3bbb246fdb644e03c711331594b91b252c5977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.106500  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2 ...
	I0708 23:02:24.106515  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2: {Name:mk07bce1d3519cbcd08d7913590fefe97615f3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.106597  258367 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt
	I0708 23:02:24.106652  258367 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key
	I0708 23:02:24.106701  258367 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key
	I0708 23:02:24.106710  258367 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt with IP's: []
	I0708 23:02:24.505496  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt ...
	I0708 23:02:24.505515  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt: {Name:mk1f4245b8de4cb8f5296cbf241b13df7d0321b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.505635  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key ...
	I0708 23:02:24.505649  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key: {Name:mk879c3b7560b49027151dcf6f41f1374ceeca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.505807  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem (1675 bytes)
	I0708 23:02:24.505843  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem (1078 bytes)
	I0708 23:02:24.505870  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem (1123 bytes)
	I0708 23:02:24.505903  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem (1679 bytes)
	I0708 23:02:24.506929  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0708 23:02:24.521556  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 23:02:24.535315  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 23:02:24.549240  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 23:02:24.566238  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 23:02:24.579920  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0708 23:02:24.593499  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 23:02:24.607144  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 23:02:24.621166  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 23:02:24.635126  258367 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 23:02:24.645377  258367 ssh_runner.go:149] Run: openssl version
	I0708 23:02:24.649491  258367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 23:02:24.655325  258367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:02:24.657850  258367 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jul  8 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:02:24.657899  258367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:02:24.661967  258367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
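The `b5213941.0` symlink exists because OpenSSL looks up CA certificates in /etc/ssl/certs by subject-hash filename; minikube computes the hash and links the CA under it. Equivalent shell:

    # Link the CA under its OpenSSL subject hash so TLS verification finds it.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"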
	I0708 23:02:24.667605  258367 kubeadm.go:390] StartCluster: {Name:addons-20210708230204-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:02:24.667676  258367 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 23:02:24.667724  258367 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 23:02:24.689946  258367 cri.go:76] found id: ""
	I0708 23:02:24.689998  258367 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 23:02:24.695552  258367 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 23:02:24.700899  258367 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0708 23:02:24.700938  258367 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 23:02:24.706313  258367 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 23:02:24.706345  258367 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0708 23:02:51.639152  258367 out.go:192]   - Generating certificates and keys ...
	I0708 23:02:51.642275  258367 out.go:192]   - Booting up control plane ...
	I0708 23:02:51.645712  258367 out.go:192]   - Configuring RBAC rules ...
	I0708 23:02:51.648194  258367 cni.go:93] Creating CNI manager for ""
	I0708 23:02:51.648207  258367 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:02:51.650911  258367 out.go:165] * Configuring CNI (Container Networking Interface) ...
	I0708 23:02:51.650974  258367 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0708 23:02:51.654228  258367 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.2/kubectl ...
	I0708 23:02:51.654241  258367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0708 23:02:51.665340  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0708 23:02:52.178474  258367 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 23:02:52.178524  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:52.178573  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5 minikube.k8s.io/name=addons-20210708230204-257783 minikube.k8s.io/updated_at=2021_07_08T23_02_52_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:52.334017  258367 ops.go:34] apiserver oom_adj: -16
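The oom_adj probe reads the kernel's OOM score adjustment for the running apiserver; -16 means the OOM killer will strongly prefer other processes. Reproduced directly on the node:

    # Inspect how protected kube-apiserver is from the OOM killer.
    cat "/proc/$(pgrep kube-apiserver)/oom_adj"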
	I0708 23:02:52.334150  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:52.926162  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:53.425745  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:53.926612  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:54.426349  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:54.926224  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:55.426682  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:55.926123  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:56.426433  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:56.926151  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:57.425844  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:57.925720  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:58.425779  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:58.926601  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:59.426434  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:59.925788  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:00.425782  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:00.926167  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:01.425732  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:01.925980  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:02.426344  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:02.926043  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:03.425924  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:03.926241  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:04.425728  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:04.528289  258367 kubeadm.go:985] duration metric: took 12.349811507s to wait for elevateKubeSystemPrivileges.
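The burst of identical `get sa default` runs above is a poll loop: kubeadm creates the default service account asynchronously, so minikube retries roughly twice a second (12.3s here) until it exists. A shell equivalent:

    # Wait for kubeadm to materialize the default service account.
    until sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done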
	I0708 23:03:04.528311  258367 kubeadm.go:392] StartCluster complete in 39.860709437s
	I0708 23:03:04.528325  258367 settings.go:142] acquiring lock: {Name:mkd7e81a263e91a8570dc867d9c6f95db0e3f272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:03:04.528427  258367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:03:04.528864  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig: {Name:mk7ece99e42242db0c85d6c11531cc9d1c12a34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:03:05.058397  258367 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210708230204-257783" rescaled to 1
	I0708 23:03:05.058451  258367 start.go:220] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0708 23:03:05.061804  258367 out.go:165] * Verifying Kubernetes components...
	I0708 23:03:05.061859  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:03:05.058488  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 23:03:05.058693  258367 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0708 23:03:05.061990  258367 addons.go:59] Setting volumesnapshots=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.062007  258367 addons.go:135] Setting addon volumesnapshots=true in "addons-20210708230204-257783"
	I0708 23:03:05.062033  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.062522  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.062625  258367 addons.go:59] Setting ingress=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.062640  258367 addons.go:135] Setting addon ingress=true in "addons-20210708230204-257783"
	I0708 23:03:05.062668  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.063063  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.063653  258367 addons.go:59] Setting metrics-server=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.063680  258367 addons.go:135] Setting addon metrics-server=true in "addons-20210708230204-257783"
	I0708 23:03:05.063738  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.064183  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.064242  258367 addons.go:59] Setting olm=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.064258  258367 addons.go:135] Setting addon olm=true in "addons-20210708230204-257783"
	I0708 23:03:05.064273  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.064666  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.064716  258367 addons.go:59] Setting registry=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.064729  258367 addons.go:135] Setting addon registry=true in "addons-20210708230204-257783"
	I0708 23:03:05.064744  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.065116  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.065165  258367 addons.go:59] Setting storage-provisioner=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.065177  258367 addons.go:135] Setting addon storage-provisioner=true in "addons-20210708230204-257783"
	W0708 23:03:05.065182  258367 addons.go:147] addon storage-provisioner should already be in state true
	I0708 23:03:05.065199  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.065568  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.065625  258367 addons.go:59] Setting default-storageclass=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.065639  258367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210708230204-257783"
	I0708 23:03:05.065837  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.065881  258367 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.065904  258367 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210708230204-257783"
	I0708 23:03:05.065927  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.066288  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.066342  258367 addons.go:59] Setting gcp-auth=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.098501  258367 mustload.go:65] Loading cluster: addons-20210708230204-257783
	I0708 23:03:05.098919  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.371366  258367 out.go:165]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0708 23:03:05.378321  258367 out.go:165]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0708 23:03:05.380422  258367 out.go:165]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0708 23:03:05.380489  258367 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0708 23:03:05.380503  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0708 23:03:05.380556  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.377529  258367 addons.go:135] Setting addon default-storageclass=true in "addons-20210708230204-257783"
	W0708 23:03:05.381616  258367 addons.go:147] addon default-storageclass should already be in state true
	I0708 23:03:05.381653  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.382135  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.400974  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0708 23:03:05.401045  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0708 23:03:05.401062  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0708 23:03:05.401113  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.419618  258367 out.go:165]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0708 23:03:05.419690  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 23:03:05.419788  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0708 23:03:05.419842  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.461104  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.461570  258367 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0708 23:03:05.461617  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.465100  258367 out.go:165]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0708 23:03:05.468585  258367 out.go:165]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0708 23:03:05.487461  258367 node_ready.go:35] waiting up to 6m0s for node "addons-20210708230204-257783" to be "Ready" ...
	I0708 23:03:05.492140  258367 out.go:165]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 23:03:05.492222  258367 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:03:05.492230  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 23:03:05.492276  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.504847  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
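The sed pipeline above splices a `hosts` block into the CoreDNS Corefile ahead of the `forward` plugin, so in-cluster lookups of host.minikube.internal resolve to the gateway without leaving the node. To inspect the result:

    # Print the patched Corefile from the live configmap.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'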
	I0708 23:03:05.515760  258367 out.go:165]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0708 23:03:05.519387  258367 out.go:165]   - Using image registry:2.7.1
	I0708 23:03:05.519516  258367 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0708 23:03:05.519538  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0708 23:03:05.519599  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.661979  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0708 23:03:05.661925  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.661958  258367 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0708 23:03:05.667742  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0708 23:03:05.667802  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.670800  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0708 23:03:05.676575  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0708 23:03:05.683758  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0708 23:03:05.688314  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0708 23:03:05.694203  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0708 23:03:05.701383  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0708 23:03:05.708144  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0708 23:03:05.714269  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0708 23:03:05.714329  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0708 23:03:05.714337  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0708 23:03:05.714395  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.847067  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.879578  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.938388  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.943788  258367 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 23:03:05.943837  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 23:03:05.943906  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.963212  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.963638  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.968031  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.033345  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.085378  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.088288  258367 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0708 23:03:06.088302  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0708 23:03:06.112546  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:03:06.168852  258367 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0708 23:03:06.184062  258367 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0708 23:03:06.184080  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0708 23:03:06.201809  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 23:03:06.201825  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0708 23:03:06.221468  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0708 23:03:06.221486  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0708 23:03:06.246330  258367 addons.go:135] Setting addon gcp-auth=true in "addons-20210708230204-257783"
	I0708 23:03:06.246370  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:06.246846  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:06.288342  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0708 23:03:06.290388  258367 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0708 23:03:06.290404  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0708 23:03:06.297690  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0708 23:03:06.297703  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0708 23:03:06.300511  258367 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0708 23:03:06.300523  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0708 23:03:06.309464  258367 out.go:165]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0708 23:03:06.312084  258367 out.go:165]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0708 23:03:06.312130  258367 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0708 23:03:06.312141  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0708 23:03:06.312185  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:06.311243  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 23:03:06.312350  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0708 23:03:06.315934  258367 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0708 23:03:06.315947  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0708 23:03:06.333850  258367 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0708 23:03:06.333866  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0708 23:03:06.340033  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0708 23:03:06.347208  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 23:03:06.368086  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.373362  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 23:03:06.373381  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0708 23:03:06.407298  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0708 23:03:06.407832  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0708 23:03:06.407846  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0708 23:03:06.433603  258367 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0708 23:03:06.433620  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0708 23:03:06.459997  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 23:03:06.521614  258367 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0708 23:03:06.521634  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0708 23:03:06.526576  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0708 23:03:06.526592  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0708 23:03:06.630213  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0708 23:03:06.630233  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0708 23:03:06.665127  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0708 23:03:06.665146  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0708 23:03:06.730299  258367 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0708 23:03:06.730316  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0708 23:03:06.757125  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0708 23:03:06.757143  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0708 23:03:06.802645  258367 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 23:03:06.802661  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0708 23:03:06.832185  258367 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0708 23:03:06.832201  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0708 23:03:06.877465  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0708 23:03:06.877483  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0708 23:03:06.959711  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0708 23:03:06.962339  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 23:03:07.007218  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0708 23:03:07.007279  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0708 23:03:07.096016  258367 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.591115427s)
	I0708 23:03:07.096079  258367 start.go:730] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
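
	Editorial note: the sed pipeline that just completed (1.59s; started at 23:03:05.504847) rewrites the kube-system/coredns ConfigMap so the Corefile gains a hosts block ahead of its forward directive. Reconstructed from the sed expression itself, the injected stanza is:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }

	This is what start.go:730 means by the host record being injected: pods can now resolve host.minikube.internal to 192.168.49.1, the host-side gateway of minikube's Docker network.
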
	I0708 23:03:07.129552  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0708 23:03:07.129605  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0708 23:03:07.221658  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0708 23:03:07.221716  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0708 23:03:07.325389  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0708 23:03:07.325435  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0708 23:03:07.426703  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0708 23:03:07.426761  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0708 23:03:07.442363  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.329793015s)
	I0708 23:03:07.524298  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0708 23:03:07.757844  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
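
	Editorial note: node_ready.go is polling the node object until its Ready condition flips to True, within the 6m0s budget declared at 23:03:05.487461. A rough command-line equivalent of that check, wrapped in Go; the function is hypothetical, not minikube's implementation:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    	"time"
	    )

	    // nodeReady reports whether the node's Ready condition is "True",
	    // asking kubectl for just that condition's status via jsonpath.
	    func nodeReady(node string) (bool, error) {
	    	out, err := exec.Command("kubectl", "get", "node", node, "-o",
	    		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	    	if err != nil {
	    		return false, err
	    	}
	    	return strings.TrimSpace(string(out)) == "True", nil
	    }

	    func main() {
	    	for deadline := time.Now().Add(6 * time.Minute); time.Now().Before(deadline); {
	    		if ok, _ := nodeReady("addons-20210708230204-257783"); ok {
	    			fmt.Println("node is Ready")
	    			return
	    		}
	    		time.Sleep(2 * time.Second) // the log above re-checks on a similar cadence
	    	}
	    	fmt.Println("timed out waiting for node Ready")
	    }
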
	I0708 23:03:08.324926  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.036545542s)
	I0708 23:03:08.324953  258367 addons.go:313] Verifying addon registry=true in "addons-20210708230204-257783"
	I0708 23:03:08.327648  258367 out.go:165] * Verifying registry addon...
	I0708 23:03:08.329284  258367 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0708 23:03:08.430886  258367 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0708 23:03:08.430908  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
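
	Editorial note: the kapi.go:75/96 lines that dominate the rest of this log are a label-selector poll: list the pods matching a selector and keep waiting while any of them is still Pending. A compact sketch of that loop (hypothetical helper, shelling out to kubectl rather than using client-go as minikube does):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    	"time"
	    )

	    // waitPodsRunning polls pods matching selector in ns until none
	    // reports the Pending phase, or the timeout elapses.
	    func waitPodsRunning(ns, selector string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
	    			"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
	    		phases := strings.Fields(string(out))
	    		if err == nil && len(phases) > 0 && !contains(phases, "Pending") {
	    			return nil
	    		}
	    		time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
	    	}
	    	return fmt.Errorf("pods %q in %q still Pending after %v", selector, ns, timeout)
	    }

	    func contains(xs []string, s string) bool {
	    	for _, x := range xs {
	    		if x == s {
	    			return true
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	fmt.Println(waitPodsRunning("kube-system",
	    		"kubernetes.io/minikube-addons=registry", 6*time.Minute))
	    }
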
	I0708 23:03:08.965833  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:09.475295  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:09.989922  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:10.040204  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:10.520729  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:10.760315  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.413074224s)
	I0708 23:03:10.760389  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (4.42033799s)
	I0708 23:03:10.760409  258367 addons.go:313] Verifying addon ingress=true in "addons-20210708230204-257783"
	I0708 23:03:10.763179  258367 out.go:165] * Verifying ingress addon...
	I0708 23:03:10.764874  258367 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0708 23:03:10.813355  258367 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0708 23:03:10.813398  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:10.954984  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:11.316902  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:11.437680  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:11.816977  258367 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0708 23:03:11.816992  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:11.933802  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:12.317784  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:12.440866  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:12.562931  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:12.826050  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:12.960758  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:12.989544  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (6.582214242s)
	W0708 23:03:12.989579  258367 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0708 23:03:12.989597  258367 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
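
	Editorial note: the "no matches for kind ..." failures above (and the VolumeSnapshotClass one just below) are the usual create-CRD-and-CR-in-one-apply race: crds.yaml and olm.yaml go through a single kubectl apply, and the OperatorGroup/ClusterServiceVersion/CatalogSource objects are rejected because the CRDs created moments earlier are not yet established in API discovery. retry.go:31 handles this by simply reapplying after a backoff ("will retry after 276.165072ms"). A minimal sketch of that retry idea, assuming kubectl is on PATH; the names are illustrative, not minikube's code:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // applyWithRetry reapplies the manifests with exponential backoff,
	    // which gives freshly created CRDs time to become established
	    // between attempts.
	    func applyWithRetry(attempts int, base time.Duration, files ...string) error {
	    	args := []string{"apply"}
	    	for _, f := range files {
	    		args = append(args, "-f", f)
	    	}
	    	var err error
	    	for i := 0; i < attempts; i++ {
	    		if err = exec.Command("kubectl", args...).Run(); err == nil {
	    			return nil
	    		}
	    		time.Sleep(base << uint(i)) // 300ms, 600ms, 1.2s, ...
	    	}
	    	return fmt.Errorf("apply failed after %d attempts: %w", attempts, err)
	    }

	    func main() {
	    	fmt.Println(applyWithRetry(5, 300*time.Millisecond,
	    		"/etc/kubernetes/addons/crds.yaml", "/etc/kubernetes/addons/olm.yaml"))
	    }
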
	I0708 23:03:12.989682  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.529663613s)
	I0708 23:03:12.989696  258367 addons.go:313] Verifying addon metrics-server=true in "addons-20210708230204-257783"
	I0708 23:03:12.989760  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (6.029992828s)
	I0708 23:03:12.989771  258367 addons.go:313] Verifying addon gcp-auth=true in "addons-20210708230204-257783"
	I0708 23:03:12.992443  258367 out.go:165] * Verifying gcp-auth addon...
	I0708 23:03:12.994031  258367 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0708 23:03:12.990200  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.027809025s)
	W0708 23:03:12.994192  258367 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0708 23:03:12.994206  258367 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
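
	Editorial note: same race here. csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in snapshot.storage.k8s.io/v1, applied in the same batch as the CRDs that serve that version, so the first attempt fails and is retried 360ms later; the rerun at 23:03:13.355033 appears to succeed, completing in 1.5s at 23:03:14.871664 with no further error. An alternative to blind retries is to gate explicitly on CRD establishment; a sketch using kubectl wait (a real subcommand; the surrounding helper is hypothetical):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // establishCRD blocks until the named CRD reports the Established
	    // condition; after that, CRs of that kind can be applied safely.
	    func establishCRD(name string) error {
	    	return exec.Command("kubectl", "wait", "--for=condition=established",
	    		"--timeout=60s", "crd/"+name).Run()
	    }

	    func main() {
	    	if err := establishCRD("volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
	    		fmt.Println("CRD never became established:", err)
	    		return
	    	}
	    	fmt.Println(exec.Command("kubectl", "apply", "-f",
	    		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml").Run())
	    }
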
	I0708 23:03:13.031412  258367 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0708 23:03:13.031426  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:13.266238  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0708 23:03:13.280326  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.755950239s)
	I0708 23:03:13.280349  258367 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210708230204-257783"
	I0708 23:03:13.282866  258367 out.go:165] * Verifying csi-hostpath-driver addon...
	I0708 23:03:13.284710  258367 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0708 23:03:13.300997  258367 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0708 23:03:13.301018  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:13.323810  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:13.355033  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 23:03:13.456204  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:13.709988  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:13.828512  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:13.829141  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:13.935230  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:14.055589  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:14.305521  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:14.316503  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:14.435410  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:14.537594  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:14.807762  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:14.828786  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:14.871664  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.516602534s)
	I0708 23:03:14.871693  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (1.605427862s)
	I0708 23:03:14.934512  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:15.028279  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:15.033924  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:15.306879  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:15.316769  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:15.435470  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:15.541639  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:15.807294  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:15.816837  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:15.933956  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:16.034246  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:16.309311  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:16.321752  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:16.434021  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:16.533676  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:16.805508  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:16.815731  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:16.933818  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:17.033326  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:17.305073  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:17.316483  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:17.434168  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:17.528224  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:17.533933  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:17.804563  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:17.815690  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:17.934352  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:18.033474  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:18.306836  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:18.315822  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:18.433636  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:18.534190  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:18.878422  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:18.879963  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:18.934171  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:19.033713  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:19.306039  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:19.316562  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:19.433898  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:19.533217  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:19.804900  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:19.815673  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:19.934357  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:20.028418  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:20.033110  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:20.307396  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:20.315940  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:20.434357  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:20.533275  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:20.805517  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:20.815912  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:20.934171  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:21.033363  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:21.305599  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:21.315940  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:21.434182  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:21.533779  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:21.806276  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:21.816677  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:21.934247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:22.028451  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:22.033427  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:22.306106  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:22.316600  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:22.434222  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:22.534180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:22.805282  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:22.816023  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:22.934458  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:23.034254  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:23.305403  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:23.315955  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:23.434280  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:23.533816  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:23.805762  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:23.816193  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:23.933917  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:24.033525  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:24.305586  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:24.316060  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:24.434308  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:24.528203  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:24.533860  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:24.804747  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:24.816238  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:24.934405  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:25.033542  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:25.305381  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:25.315630  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:25.434115  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:25.533435  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:25.805258  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:25.815583  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:25.934247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:26.033468  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:26.313468  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:26.316207  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:26.434177  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:26.533378  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:26.805481  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:26.815971  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:26.933457  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:27.028244  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:27.033944  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:27.305978  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:27.316514  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:27.433572  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:27.533438  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:27.808338  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:27.815850  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:27.934256  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:28.033462  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:28.305603  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:28.316131  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:28.434164  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:28.533546  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:28.805406  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:28.815807  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:29.027114  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:29.028529  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:29.033307  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:29.305177  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:29.316667  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:29.434163  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:29.533817  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:29.804531  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:29.815964  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:29.934253  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:30.034141  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:30.304975  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:30.316394  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:30.434247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:30.533988  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:30.804599  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:30.815861  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:30.934212  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:31.033880  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:31.305080  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:31.316591  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:31.438379  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:31.528481  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:31.534085  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:31.805329  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:31.815830  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:31.934243  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:32.033930  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:32.305955  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:32.316622  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:32.433806  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:32.534171  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:32.805098  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:32.816697  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:32.934425  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:33.033830  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:33.305586  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:33.316129  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:33.434296  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:33.533867  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:33.805664  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:33.816014  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:33.934202  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:34.027838  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:34.033689  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:34.312483  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:34.325172  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:34.433975  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:34.533845  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:34.805613  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:34.816177  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:34.934061  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:35.033908  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:35.304911  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:35.316503  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:35.433830  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:35.533555  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:35.805708  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:35.816011  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:35.934894  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:36.033706  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:36.305768  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:36.316210  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:36.433880  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:36.527982  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:36.533718  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:36.806281  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:36.816802  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:36.934001  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:37.033899  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:37.305715  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:37.316438  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:37.434076  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:37.534167  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:37.805363  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:37.815663  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:37.934569  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:38.033576  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:38.305386  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:38.315919  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:38.434305  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:38.528076  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:38.533725  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:38.805854  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:38.816199  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:38.933965  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:39.161933  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:39.305111  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:39.316710  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:39.433757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:39.533280  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:39.805180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:39.815640  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:39.933837  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:40.033359  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:40.305820  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:40.316500  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:40.434764  258367 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0708 23:03:40.434780  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:40.529064  258367 node_ready.go:49] node "addons-20210708230204-257783" has status "Ready":"True"
	I0708 23:03:40.529082  258367 node_ready.go:38] duration metric: took 35.041601595s waiting for node "addons-20210708230204-257783" to be "Ready" ...
	I0708 23:03:40.529090  258367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
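
[Editor's note: the node_ready lines mark the first milestone: the node's Ready condition flips from False to True roughly 35s in, after which minikube begins the 6m0s wait for the system-critical pods listed above. A sketch of the node-side check, reusing the clientset and imports from the sketch above (the helper name is ours, not minikube's):]

    // nodeIsReady reports whether the named node has condition Ready=True,
    // the check behind the node_ready.go:49/:58 lines above (sketch only).
    func nodeIsReady(client kubernetes.Interface, name string) (bool, error) {
    	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil // node has not reported a Ready condition yet
    }
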
	I0708 23:03:40.536245  258367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:40.538782  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:40.805427  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:40.815950  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:40.935684  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:41.033391  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:41.305384  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:41.315751  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:41.434230  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:41.534294  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:41.805062  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:41.816413  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:41.934437  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:42.033426  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:42.305936  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:42.316832  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:42.452020  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:42.537291  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:42.556855  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
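
[Editor's note: the PodStatus dump above explains why coredns is still Pending: its PodScheduled condition is False with reason Unschedulable, because the single node still carries the node.kubernetes.io/not-ready taint, which the node lifecycle controller only removes after the node reports Ready. One way to surface that taint, again with the illustrative clientset (hypothetical helper, not from the minikube codebase):]

    // notReadyTainted reports whether the node still carries the
    // node.kubernetes.io/not-ready taint that blocks pod scheduling.
    func notReadyTainted(client kubernetes.Interface, name string) (bool, error) {
    	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, t := range node.Spec.Taints {
    		if t.Key == "node.kubernetes.io/not-ready" {
    			return true, nil
    		}
    	}
    	return false, nil
    }
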
	I0708 23:03:42.808971  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:42.837617  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:42.948502  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:43.042597  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:43.306461  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:43.316194  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:43.435340  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:43.545175  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:43.808213  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:43.816653  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:43.939445  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:44.036294  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:44.310153  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:44.316861  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:44.452869  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:44.534309  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:44.858439  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:44.859060  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:44.980373  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:45.033572  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:45.055378  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:45.307109  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:45.317405  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:45.436040  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:45.535852  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:45.813865  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:45.822168  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:45.938855  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:46.034757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:46.307667  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:46.318795  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:46.434514  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:46.534105  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:46.808438  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:46.816223  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:46.934772  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:47.047317  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:47.055451  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:47.309428  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:47.326146  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:47.435940  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:47.533825  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:47.818764  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:47.819392  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:47.939958  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:48.035140  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:48.310806  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:48.322692  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:48.435268  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:48.534774  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:48.805303  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:48.816955  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:48.934285  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:49.034137  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:49.305083  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:49.317534  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:49.434132  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:49.534176  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:49.553147  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:49.806795  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:49.816930  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:49.935353  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:50.044577  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:50.308970  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:50.317116  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:50.453642  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:50.537013  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:50.809687  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:50.816605  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:50.934956  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:51.034111  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:51.306357  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:51.316788  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:51.434223  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:51.533862  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:51.555405  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:51.806049  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:51.816665  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:51.935239  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:52.036764  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:52.306561  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:52.317258  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:52.435120  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:52.534680  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:52.812738  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:52.823062  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:52.935048  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:53.088180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:53.313593  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:53.316210  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:53.435174  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:53.534856  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:53.568302  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:53.814037  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:53.828699  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:53.935094  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:54.034051  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:54.305943  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:54.316383  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:54.434884  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:54.533560  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:54.805349  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:54.816025  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:54.935440  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:55.035068  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:55.305204  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:55.316789  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:55.434307  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:55.534194  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:55.805524  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:55.816474  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:55.938616  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:56.035071  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:56.054514  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:56.306243  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:56.317135  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:56.437740  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:56.534893  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:56.808396  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:56.818595  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:56.935008  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:57.033737  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:57.305195  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:57.316759  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:57.434680  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:57.534533  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:57.805918  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:57.816886  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:57.934412  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:58.034045  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:58.053423  258367 pod_ready.go:92] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.053448  258367 pod_ready.go:81] duration metric: took 17.517183186s waiting for pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.053472  258367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.056899  258367 pod_ready.go:92] pod "etcd-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.056912  258367 pod_ready.go:81] duration metric: took 3.428532ms waiting for pod "etcd-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.056924  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.060405  258367 pod_ready.go:92] pod "kube-apiserver-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.060421  258367 pod_ready.go:81] duration metric: took 3.48906ms waiting for pod "kube-apiserver-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.060430  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.063897  258367 pod_ready.go:92] pod "kube-controller-manager-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.063911  258367 pod_ready.go:81] duration metric: took 3.473676ms waiting for pod "kube-controller-manager-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.063920  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6dvf4" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.067194  258367 pod_ready.go:92] pod "kube-proxy-6dvf4" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.067211  258367 pod_ready.go:81] duration metric: took 3.28452ms waiting for pod "kube-proxy-6dvf4" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.067219  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
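
[Editor's note: once scheduling unblocks, the pod_ready checks complete quickly: coredns takes 17.5s end to end, while etcd, kube-apiserver, kube-controller-manager, and kube-proxy each pass within a few milliseconds because the check is a single read of the pod's Ready condition. A sketch of that check, under the same assumptions as the sketches above:]

    // podIsReady reports whether pod has condition Ready=True, mirroring
    // the pod_ready.go:92 / pod_ready.go:102 log lines above (sketch only;
    // minikube's pod_ready.go also handles deletion and phase edge cases).
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
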
	I0708 23:03:58.305241  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:58.316828  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:58.433981  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:58.452430  258367 pod_ready.go:92] pod "kube-scheduler-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.452441  258367 pod_ready.go:81] duration metric: took 385.215878ms waiting for pod "kube-scheduler-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.452450  258367 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.534269  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:58.805326  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:58.817091  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:58.934377  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:59.034175  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:59.350171  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:59.352070  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:59.434599  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:59.534222  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:59.805415  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:59.815966  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:59.934956  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:00.034180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:00.309488  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:00.318040  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:00.446679  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:00.535081  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:00.816881  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:00.833611  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:00.870726  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:00.940553  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:01.034681  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:01.307110  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:01.317003  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:01.434206  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:01.534362  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:01.807693  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:01.816875  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:01.934543  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:02.034256  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:02.314645  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:02.317830  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:02.434638  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:02.534555  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:02.805757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:02.816388  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:02.934273  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:03.034346  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:03.306135  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:03.316729  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:03.356699  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:03.434801  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:03.533628  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:03.806090  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:03.817066  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:03.935092  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:04.033904  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:04.308003  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:04.321774  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:04.437942  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:04.537834  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:04.811895  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:04.820598  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:04.941910  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:05.033681  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:05.306843  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:05.316309  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:05.434665  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:05.534399  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:05.808209  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:05.828690  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:05.867140  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:05.948427  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:06.034144  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:06.310817  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:06.317048  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:06.438453  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:06.538009  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:06.814561  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:06.818503  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:06.934667  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:07.034023  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:07.324320  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:07.329971  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:07.435453  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:07.534858  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:07.806398  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:07.816291  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:07.936272  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:08.045241  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:08.321615  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:08.322966  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:08.361134  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:08.444857  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:08.539896  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:08.810573  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:08.824748  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:08.950445  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:09.038119  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:09.309840  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:09.319094  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:09.451790  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:09.534821  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:09.806319  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:09.824601  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:09.948537  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:10.038714  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:10.306101  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:10.316712  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:10.444324  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:10.534107  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:10.816348  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:10.821169  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:10.880798  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:10.938689  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:11.035004  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:11.309429  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:11.316532  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:11.437001  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:11.534517  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:11.806765  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:11.817111  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:11.936678  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:12.035677  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:12.315185  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:12.319448  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:12.435665  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:12.533892  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:12.806413  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:12.816140  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:12.935226  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:13.033973  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:13.308652  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:13.318458  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:13.359912  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:13.434988  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:13.535000  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:13.807981  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:13.817954  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:13.939485  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:14.035287  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:14.310831  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:14.317343  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:14.434300  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:14.533986  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:14.806121  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:14.817043  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:14.934166  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:15.034426  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:15.323217  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:15.333228  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:15.361384  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:15.435068  258367 kapi.go:108] duration metric: took 1m7.105782912s to wait for kubernetes.io/minikube-addons=registry ...
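
[Editor's note: kapi.go:108 closes the registry wait at 1m7.1s, leaving only the gcp-auth, ingress-nginx, and csi-hostpath-driver selectors still pending. The "duration metric" is elapsed wall-clock time around the wait; an illustrative wrapper over the waitForSelector sketch above:]

    // timedWait wraps waitForSelector (defined above) and logs elapsed
    // time in the style of the kapi.go:108 duration metric (sketch only).
    func timedWait(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	start := time.Now()
    	if err := waitForSelector(client, ns, selector, timeout); err != nil {
    		return err
    	}
    	fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
    	return nil
    }
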
	I0708 23:04:15.537812  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:15.813281  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:15.819581  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:16.049106  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:16.307540  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:16.316183  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:16.534458  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:16.806532  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:16.818477  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:17.033945  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:17.305444  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:17.315952  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:17.391245  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:17.536664  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:17.807013  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:17.817134  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:18.033576  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:18.307317  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:18.316474  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:18.534860  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:18.806288  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:18.821696  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:19.034372  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:19.312150  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:19.320785  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:19.539562  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:19.809151  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:19.819730  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:19.859154  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:20.034514  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:20.308034  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:20.318688  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:20.537429  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:20.828970  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:20.835630  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:21.037624  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:21.308202  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:21.317118  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:21.536565  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:21.835761  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:21.836860  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:21.860825  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:22.053067  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:22.322252  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:22.328107  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:22.537575  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:22.807319  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:22.817028  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:23.034601  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:23.306551  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:23.317288  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:23.534889  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:23.806323  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:23.817210  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:24.034124  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:24.305483  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:24.316211  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:24.355999  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:24.535624  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:24.807872  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:24.818209  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:25.037306  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:25.311421  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:25.322366  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:25.533957  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:25.805852  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:25.816446  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:26.034421  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:26.306096  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:26.316792  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:26.356696  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:26.534030  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:26.806374  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:26.817396  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:27.034314  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:27.341253  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:27.348955  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:27.544750  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:27.816243  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:27.830215  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:28.037855  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:28.314775  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:28.332322  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:28.374180  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:28.545092  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:28.806885  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:28.817006  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:29.049276  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:29.307196  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:29.317899  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:29.714913  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:29.813421  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:29.822469  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:30.034706  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:30.307815  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:30.317761  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:30.376585  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:30.536075  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:30.819025  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:30.820506  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:30.861001  258367 pod_ready.go:92] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"True"
	I0708 23:04:30.861017  258367 pod_ready.go:81] duration metric: took 32.408556096s waiting for pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace to be "Ready" ...
	I0708 23:04:30.861035  258367 pod_ready.go:38] duration metric: took 50.331926706s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
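The pod_ready.go entries above are a polling loop: every couple of seconds minikube lists the labelled pods and checks whether each one's Ready condition has flipped to True, giving up at the stated timeout. A minimal standalone sketch of that pattern with client-go follows; the namespace, label selector, poll interval, and timeout are illustrative assumptions, not minikube's exact wiring.

// podready_sketch.go - illustrative only; not minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodsReady polls every 500ms until all pods matching selector in ns
// report a Ready=True condition, or the context deadline expires.
func waitPodsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if !isReady(&p) {
				ready = false
				break
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

// isReady reports whether the pod's Ready condition is True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitPodsReady(ctx, cs, "kube-system", "k8s-app=kube-dns"))
}

Run against a live cluster this prints nil once every matching pod reports Ready, which is the moment the duration-metric lines above record.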
	I0708 23:04:30.861054  258367 api_server.go:50] waiting for apiserver process to appear ...
	I0708 23:04:30.861071  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 23:04:30.861149  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 23:04:31.039139  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:31.120445  258367 cri.go:76] found id: "31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:31.120490  258367 cri.go:76] found id: ""
	I0708 23:04:31.120510  258367 logs.go:270] 1 containers: [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968]
	I0708 23:04:31.120582  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.126428  258367 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 23:04:31.126503  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 23:04:31.157908  258367 cri.go:76] found id: "22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:31.157946  258367 cri.go:76] found id: ""
	I0708 23:04:31.157962  258367 logs.go:270] 1 containers: [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba]
	I0708 23:04:31.158024  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.160719  258367 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 23:04:31.160800  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 23:04:31.195992  258367 cri.go:76] found id: "b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:31.196010  258367 cri.go:76] found id: ""
	I0708 23:04:31.196015  258367 logs.go:270] 1 containers: [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891]
	I0708 23:04:31.196063  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.198761  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 23:04:31.198825  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 23:04:31.239007  258367 cri.go:76] found id: "44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:31.239025  258367 cri.go:76] found id: ""
	I0708 23:04:31.239030  258367 logs.go:270] 1 containers: [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab]
	I0708 23:04:31.239073  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.241734  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 23:04:31.241798  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 23:04:31.272832  258367 cri.go:76] found id: "49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:31.272850  258367 cri.go:76] found id: ""
	I0708 23:04:31.272856  258367 logs.go:270] 1 containers: [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27]
	I0708 23:04:31.272900  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.275666  258367 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 23:04:31.275734  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 23:04:31.301615  258367 cri.go:76] found id: ""
	I0708 23:04:31.301628  258367 logs.go:270] 0 containers: []
	W0708 23:04:31.301634  258367 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0708 23:04:31.301641  258367 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 23:04:31.301678  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 23:04:31.311401  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:31.322172  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:31.346810  258367 cri.go:76] found id: "bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:31.346834  258367 cri.go:76] found id: ""
	I0708 23:04:31.346840  258367 logs.go:270] 1 containers: [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc]
	I0708 23:04:31.346879  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.349712  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 23:04:31.349757  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 23:04:31.373832  258367 cri.go:76] found id: "8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:31.373850  258367 cri.go:76] found id: ""
	I0708 23:04:31.373856  258367 logs.go:270] 1 containers: [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194]
	I0708 23:04:31.373899  258367 ssh_runner.go:149] Run: which crictl
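Each cri.go round above shells out to "sudo crictl ps -a --quiet --name=<component>" and treats every non-empty output line as a container ID; an empty result (as for kubernetes-dashboard) produces the "No container was found" warning. A rough standalone equivalent, runnable on the node itself, assuming crictl is on PATH and sudo is non-interactive:

// crictl_ids_sketch.go - illustrative; not minikube's cri.go.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (any state) whose name
// matches the filter, mirroring "sudo crictl ps -a --quiet --name=X".
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(c)
		fmt.Printf("%s: %v (err=%v)\n", c, ids, err)
	}
}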
	I0708 23:04:31.376711  258367 logs.go:123] Gathering logs for storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] ...
	I0708 23:04:31.376736  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:31.414919  258367 logs.go:123] Gathering logs for kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] ...
	I0708 23:04:31.414940  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:31.472616  258367 logs.go:123] Gathering logs for CRI-O ...
	I0708 23:04:31.472645  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 23:04:31.534862  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:31.614324  258367 logs.go:123] Gathering logs for kubelet ...
	I0708 23:04:31.614347  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 23:04:31.696068  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:31.697579  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:31.733579  258367 logs.go:123] Gathering logs for kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] ...
	I0708 23:04:31.733602  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:31.808651  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:31.826645  258367 logs.go:123] Gathering logs for coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] ...
	I0708 23:04:31.826667  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:31.830534  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:31.860747  258367 logs.go:123] Gathering logs for kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] ...
	I0708 23:04:31.860772  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:31.896690  258367 logs.go:123] Gathering logs for kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] ...
	I0708 23:04:31.896730  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:31.927741  258367 logs.go:123] Gathering logs for container status ...
	I0708 23:04:31.927787  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 23:04:31.997716  258367 logs.go:123] Gathering logs for dmesg ...
	I0708 23:04:31.997741  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 23:04:32.056732  258367 logs.go:123] Gathering logs for describe nodes ...
	I0708 23:04:32.056755  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 23:04:32.079686  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:32.360060  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:32.372384  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:32.406945  258367 logs.go:123] Gathering logs for etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] ...
	I0708 23:04:32.406966  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:32.447186  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:32.447204  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	W0708 23:04:32.447320  258367 out.go:230] X Problems detected in kubelet:
	W0708 23:04:32.447329  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:32.447336  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:32.447342  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:32.447346  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
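The "Found kubelet problem" warnings and the "X Problems detected in kubelet:" summary come from scanning the last 400 journal lines of the kubelet unit for known failure patterns; here the hits are reflector list/watch failures caused by node-restriction RBAC ("no relationship found between node ... and this object"). A hedged sketch of such a scan, where the single regexp stands in for minikube's actual pattern table in logs.go:

// kubelet_problems_sketch.go - illustrative log scan, not minikube's logs.go.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
	"strings"
)

// problemRe is an assumed pattern: it flags kubelet reflector list/watch
// failures like the RBAC errors reported in the transcript above.
var problemRe = regexp.MustCompile(`reflector\.go:\d+\].*Failed to watch`)

// kubeletProblems pulls the last 400 kubelet journal lines and returns
// the ones matching the problem pattern.
func kubeletProblems() ([]string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		return nil, err
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if problemRe.MatchString(sc.Text()) {
			problems = append(problems, sc.Text())
		}
	}
	return problems, sc.Err()
}

func main() {
	ps, err := kubeletProblems()
	fmt.Printf("%d problems (err=%v)\n", len(ps), err)
	for _, p := range ps {
		fmt.Println("  " + p)
	}
}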
	I0708 23:04:32.535609  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:32.871932  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:32.875314  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:33.035264  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:33.322729  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:33.325758  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:33.539049  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:33.809122  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:33.840811  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:34.038231  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:34.305821  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:34.316472  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:34.536871  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:34.806324  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:34.817609  258367 kapi.go:108] duration metric: took 1m24.052734457s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0708 23:04:35.034113  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:35.305618  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:35.534039  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:35.805464  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:36.033757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:36.305563  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:36.535022  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:36.810065  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:37.044021  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:37.306365  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:37.534507  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:37.806247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:38.034711  258367 kapi.go:108] duration metric: took 1m25.040677396s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0708 23:04:38.037692  258367 out.go:165] * Your GCP credentials will now be mounted into every pod created in the addons-20210708230204-257783 cluster.
	I0708 23:04:38.039905  258367 out.go:165] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0708 23:04:38.043013  258367 out.go:165] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0708 23:04:38.305726  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:38.805442  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:39.313290  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:39.806014  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:40.305951  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:40.805935  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:41.306167  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:41.807569  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:42.306223  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:42.448401  258367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:04:42.471087  258367 api_server.go:70] duration metric: took 1m37.412609728s to wait for apiserver process to appear ...
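The process wait above simply retries "sudo pgrep -xnf kube-apiserver.*minikube.*" until it exits 0 (-x exact match, -n newest process, -f match against the full command line). A standalone version, with the poll interval and timeout as assumptions:

// apiserver_process_sketch.go - illustrative; run on the minikube node.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep once a second until a process whose full
// command line matches pattern exists, or the context expires.
func waitForProcess(ctx context.Context, pattern string) error {
	for {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // exit status 0: at least one matching process
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitForProcess(ctx, "kube-apiserver.*minikube.*"))
}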
	I0708 23:04:42.471136  258367 api_server.go:86] waiting for apiserver healthz status ...
	I0708 23:04:42.471163  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 23:04:42.471209  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 23:04:42.498226  258367 cri.go:76] found id: "31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:42.498240  258367 cri.go:76] found id: ""
	I0708 23:04:42.498245  258367 logs.go:270] 1 containers: [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968]
	I0708 23:04:42.498287  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.500629  258367 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 23:04:42.500670  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 23:04:42.522032  258367 cri.go:76] found id: "22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:42.522070  258367 cri.go:76] found id: ""
	I0708 23:04:42.522089  258367 logs.go:270] 1 containers: [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba]
	I0708 23:04:42.522123  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.524515  258367 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 23:04:42.524555  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 23:04:42.544404  258367 cri.go:76] found id: "b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:42.544416  258367 cri.go:76] found id: ""
	I0708 23:04:42.544421  258367 logs.go:270] 1 containers: [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891]
	I0708 23:04:42.544452  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.546783  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 23:04:42.546847  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 23:04:42.566390  258367 cri.go:76] found id: "44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:42.566406  258367 cri.go:76] found id: ""
	I0708 23:04:42.566410  258367 logs.go:270] 1 containers: [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab]
	I0708 23:04:42.566444  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.568814  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 23:04:42.568853  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 23:04:42.589259  258367 cri.go:76] found id: "49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:42.589295  258367 cri.go:76] found id: ""
	I0708 23:04:42.589306  258367 logs.go:270] 1 containers: [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27]
	I0708 23:04:42.589338  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.591563  258367 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 23:04:42.591603  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 23:04:42.614367  258367 cri.go:76] found id: ""
	I0708 23:04:42.614381  258367 logs.go:270] 0 containers: []
	W0708 23:04:42.614386  258367 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0708 23:04:42.614393  258367 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 23:04:42.614447  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 23:04:42.635565  258367 cri.go:76] found id: "bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:42.635602  258367 cri.go:76] found id: ""
	I0708 23:04:42.635617  258367 logs.go:270] 1 containers: [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc]
	I0708 23:04:42.635661  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.638113  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 23:04:42.638155  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 23:04:42.658400  258367 cri.go:76] found id: "8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:42.658416  258367 cri.go:76] found id: ""
	I0708 23:04:42.658420  258367 logs.go:270] 1 containers: [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194]
	I0708 23:04:42.658462  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.660879  258367 logs.go:123] Gathering logs for describe nodes ...
	I0708 23:04:42.660896  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 23:04:42.804504  258367 logs.go:123] Gathering logs for kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] ...
	I0708 23:04:42.804554  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:42.813337  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:42.865302  258367 logs.go:123] Gathering logs for etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] ...
	I0708 23:04:42.865326  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:42.890504  258367 logs.go:123] Gathering logs for coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] ...
	I0708 23:04:42.890524  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:42.910953  258367 logs.go:123] Gathering logs for storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] ...
	I0708 23:04:42.910972  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:42.931942  258367 logs.go:123] Gathering logs for container status ...
	I0708 23:04:42.931963  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 23:04:42.960735  258367 logs.go:123] Gathering logs for dmesg ...
	I0708 23:04:42.960775  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 23:04:43.006287  258367 logs.go:123] Gathering logs for kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] ...
	I0708 23:04:43.006333  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:43.045345  258367 logs.go:123] Gathering logs for kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] ...
	I0708 23:04:43.045367  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:43.069858  258367 logs.go:123] Gathering logs for kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] ...
	I0708 23:04:43.069878  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:43.111980  258367 logs.go:123] Gathering logs for CRI-O ...
	I0708 23:04:43.112002  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 23:04:43.206087  258367 logs.go:123] Gathering logs for kubelet ...
	I0708 23:04:43.206109  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 23:04:43.295091  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:43.296592  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:43.309878  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:43.337457  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:43.337472  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	W0708 23:04:43.337580  258367 out.go:230] X Problems detected in kubelet:
	W0708 23:04:43.337591  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:43.337599  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:43.337608  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:43.337613  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:04:43.806280  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:44.307515  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:44.805825  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:45.305526  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:45.805382  258367 kapi.go:108] duration metric: took 1m32.520669956s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0708 23:04:45.807522  258367 out.go:165] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, volumesnapshots, olm, registry, ingress, gcp-auth, csi-hostpath-driver
	I0708 23:04:45.807541  258367 addons.go:344] enableAddons completed in 1m40.748851438s
	I0708 23:04:53.338820  258367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0708 23:04:53.347181  258367 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0708 23:04:53.348070  258367 api_server.go:139] control plane version: v1.21.2
	I0708 23:04:53.348089  258367 api_server.go:129] duration metric: took 10.876941605s to wait for apiserver health ...
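The healthz wait issues plain GETs against https://192.168.49.2:8443/healthz until the apiserver answers 200 with body "ok", exactly as logged above. A sketch of that probe; TLS verification is skipped only because the target is the cluster's own self-signed endpoint, and the retry cadence is an assumption:

// healthz_sketch.go - illustrative apiserver health poll.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the /healthz endpoint until it returns HTTP 200
// with body "ok", or the context expires.
func waitHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The local apiserver serves a self-signed cert, so skip verification here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	fmt.Println(waitHealthz(ctx, "https://192.168.49.2:8443/healthz"))
}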
	I0708 23:04:53.348098  258367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 23:04:53.348115  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 23:04:53.348166  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 23:04:53.375754  258367 cri.go:76] found id: "31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:53.375767  258367 cri.go:76] found id: ""
	I0708 23:04:53.375772  258367 logs.go:270] 1 containers: [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968]
	I0708 23:04:53.375811  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.378392  258367 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 23:04:53.378435  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 23:04:53.398815  258367 cri.go:76] found id: "22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:53.398829  258367 cri.go:76] found id: ""
	I0708 23:04:53.398833  258367 logs.go:270] 1 containers: [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba]
	I0708 23:04:53.398865  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.401349  258367 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 23:04:53.401392  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 23:04:53.421390  258367 cri.go:76] found id: "b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:53.421404  258367 cri.go:76] found id: ""
	I0708 23:04:53.421409  258367 logs.go:270] 1 containers: [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891]
	I0708 23:04:53.421442  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.423799  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 23:04:53.423844  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 23:04:53.443510  258367 cri.go:76] found id: "44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:53.443526  258367 cri.go:76] found id: ""
	I0708 23:04:53.443531  258367 logs.go:270] 1 containers: [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab]
	I0708 23:04:53.443560  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.445900  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 23:04:53.445940  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 23:04:53.466255  258367 cri.go:76] found id: "49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:53.466268  258367 cri.go:76] found id: ""
	I0708 23:04:53.466273  258367 logs.go:270] 1 containers: [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27]
	I0708 23:04:53.466303  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.468712  258367 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 23:04:53.468766  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 23:04:53.488311  258367 cri.go:76] found id: ""
	I0708 23:04:53.488323  258367 logs.go:270] 0 containers: []
	W0708 23:04:53.488328  258367 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0708 23:04:53.488342  258367 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 23:04:53.488393  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 23:04:53.508339  258367 cri.go:76] found id: "bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:53.508353  258367 cri.go:76] found id: ""
	I0708 23:04:53.508357  258367 logs.go:270] 1 containers: [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc]
	I0708 23:04:53.508388  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.510777  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 23:04:53.510819  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 23:04:53.530634  258367 cri.go:76] found id: "8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:53.530668  258367 cri.go:76] found id: ""
	I0708 23:04:53.530682  258367 logs.go:270] 1 containers: [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194]
	I0708 23:04:53.530721  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.533156  258367 logs.go:123] Gathering logs for etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] ...
	I0708 23:04:53.533169  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:53.558018  258367 logs.go:123] Gathering logs for kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] ...
	I0708 23:04:53.558035  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:53.581895  258367 logs.go:123] Gathering logs for kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] ...
	I0708 23:04:53.581912  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:53.633038  258367 logs.go:123] Gathering logs for coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] ...
	I0708 23:04:53.633079  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:53.661558  258367 logs.go:123] Gathering logs for kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] ...
	I0708 23:04:53.661578  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:53.686131  258367 logs.go:123] Gathering logs for storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] ...
	I0708 23:04:53.686151  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:53.711706  258367 logs.go:123] Gathering logs for kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] ...
	I0708 23:04:53.711749  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:53.756467  258367 logs.go:123] Gathering logs for kubelet ...
	I0708 23:04:53.756491  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 23:04:53.822558  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:53.824068  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:53.869132  258367 logs.go:123] Gathering logs for dmesg ...
	I0708 23:04:53.869150  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 23:04:53.908521  258367 logs.go:123] Gathering logs for describe nodes ...
	I0708 23:04:53.908541  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 23:04:54.042355  258367 logs.go:123] Gathering logs for CRI-O ...
	I0708 23:04:54.042381  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 23:04:54.143221  258367 logs.go:123] Gathering logs for container status ...
	I0708 23:04:54.143246  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 23:04:54.173768  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:54.173789  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	W0708 23:04:54.173883  258367 out.go:230] X Problems detected in kubelet:
	W0708 23:04:54.173893  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:54.173901  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:54.173911  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:54.173915  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:05:04.184611  258367 system_pods.go:59] 18 kube-system pods found
	I0708 23:05:04.184638  258367 system_pods.go:61] "coredns-558bd4d5db-zhg8q" [fbfe6d76-09e7-4b56-8c35-638662f1daaf] Running
	I0708 23:05:04.184644  258367 system_pods.go:61] "csi-hostpath-attacher-0" [06368aac-1744-4da9-99a2-47bfbb93254b] Running
	I0708 23:05:04.184648  258367 system_pods.go:61] "csi-hostpath-provisioner-0" [c6e3d45b-6ca3-4462-b533-40a15141f9ed] Running
	I0708 23:05:04.184653  258367 system_pods.go:61] "csi-hostpath-resizer-0" [58e0abcf-e21c-4040-8d81-a9d93f509885] Running
	I0708 23:05:04.184657  258367 system_pods.go:61] "csi-hostpath-snapshotter-0" [b4319b60-be58-4608-ba01-f1a8fac1d376] Running
	I0708 23:05:04.184667  258367 system_pods.go:61] "csi-hostpathplugin-0" [d94ef600-bad7-4301-b190-602faa1f36a9] Running
	I0708 23:05:04.184672  258367 system_pods.go:61] "etcd-addons-20210708230204-257783" [cf116694-48f9-416c-a5dc-55fdf60853ea] Running
	I0708 23:05:04.184679  258367 system_pods.go:61] "kindnet-ccnc6" [b4d243ff-dec0-4629-b9b4-ae527c2c32bd] Running
	I0708 23:05:04.184684  258367 system_pods.go:61] "kube-apiserver-addons-20210708230204-257783" [f058b0e7-2349-4f21-8599-37d21a5ddcd9] Running
	I0708 23:05:04.184695  258367 system_pods.go:61] "kube-controller-manager-addons-20210708230204-257783" [3f3546e8-dfaf-419b-8eeb-4e7fe07af5fc] Running
	I0708 23:05:04.184699  258367 system_pods.go:61] "kube-proxy-6dvf4" [564c5852-25e0-4d8f-8cb8-ac83fac6ee51] Running
	I0708 23:05:04.184709  258367 system_pods.go:61] "kube-scheduler-addons-20210708230204-257783" [cd710c4c-a3d0-427d-bcab-0ebd546d7cc0] Running
	I0708 23:05:04.184713  258367 system_pods.go:61] "metrics-server-77c99ccb96-g7fdg" [d976d39d-49f5-4cfd-8756-cdcf9a8caa2a] Running
	I0708 23:05:04.184725  258367 system_pods.go:61] "registry-proxy-fbwfb" [040628b4-50ba-4169-a8d6-b9804b46e10c] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0708 23:05:04.184734  258367 system_pods.go:61] "registry-pzwnr" [dd5ee812-d26b-4dfa-a00d-7cc3e2a97c4a] Running
	I0708 23:05:04.184740  258367 system_pods.go:61] "snapshot-controller-989f9ddc8-dw52m" [e4c2411c-bf6b-4d75-9cb7-5d6404e63c8e] Running
	I0708 23:05:04.184745  258367 system_pods.go:61] "snapshot-controller-989f9ddc8-wplln" [7933b7c4-ea5d-4dd9-b07a-b6d744284afe] Running
	I0708 23:05:04.184751  258367 system_pods.go:61] "storage-provisioner" [b28c0bf2-6ddc-4279-a4b8-40712894afe3] Running
	I0708 23:05:04.184756  258367 system_pods.go:74] duration metric: took 10.836653435s to wait for pod list to return data ...
	I0708 23:05:04.184778  258367 default_sa.go:34] waiting for default service account to be created ...
	I0708 23:05:04.186977  258367 default_sa.go:45] found service account: "default"
	I0708 23:05:04.186991  258367 default_sa.go:55] duration metric: took 2.203718ms for default service account to be created ...
	I0708 23:05:04.186997  258367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 23:05:04.194163  258367 system_pods.go:86] 18 kube-system pods found
	I0708 23:05:04.194189  258367 system_pods.go:89] "coredns-558bd4d5db-zhg8q" [fbfe6d76-09e7-4b56-8c35-638662f1daaf] Running
	I0708 23:05:04.194195  258367 system_pods.go:89] "csi-hostpath-attacher-0" [06368aac-1744-4da9-99a2-47bfbb93254b] Running
	I0708 23:05:04.194201  258367 system_pods.go:89] "csi-hostpath-provisioner-0" [c6e3d45b-6ca3-4462-b533-40a15141f9ed] Running
	I0708 23:05:04.194210  258367 system_pods.go:89] "csi-hostpath-resizer-0" [58e0abcf-e21c-4040-8d81-a9d93f509885] Running
	I0708 23:05:04.194215  258367 system_pods.go:89] "csi-hostpath-snapshotter-0" [b4319b60-be58-4608-ba01-f1a8fac1d376] Running
	I0708 23:05:04.194223  258367 system_pods.go:89] "csi-hostpathplugin-0" [d94ef600-bad7-4301-b190-602faa1f36a9] Running
	I0708 23:05:04.194228  258367 system_pods.go:89] "etcd-addons-20210708230204-257783" [cf116694-48f9-416c-a5dc-55fdf60853ea] Running
	I0708 23:05:04.194239  258367 system_pods.go:89] "kindnet-ccnc6" [b4d243ff-dec0-4629-b9b4-ae527c2c32bd] Running
	I0708 23:05:04.194244  258367 system_pods.go:89] "kube-apiserver-addons-20210708230204-257783" [f058b0e7-2349-4f21-8599-37d21a5ddcd9] Running
	I0708 23:05:04.194251  258367 system_pods.go:89] "kube-controller-manager-addons-20210708230204-257783" [3f3546e8-dfaf-419b-8eeb-4e7fe07af5fc] Running
	I0708 23:05:04.194256  258367 system_pods.go:89] "kube-proxy-6dvf4" [564c5852-25e0-4d8f-8cb8-ac83fac6ee51] Running
	I0708 23:05:04.194265  258367 system_pods.go:89] "kube-scheduler-addons-20210708230204-257783" [cd710c4c-a3d0-427d-bcab-0ebd546d7cc0] Running
	I0708 23:05:04.194270  258367 system_pods.go:89] "metrics-server-77c99ccb96-g7fdg" [d976d39d-49f5-4cfd-8756-cdcf9a8caa2a] Running
	I0708 23:05:04.194281  258367 system_pods.go:89] "registry-proxy-fbwfb" [040628b4-50ba-4169-a8d6-b9804b46e10c] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0708 23:05:04.194286  258367 system_pods.go:89] "registry-pzwnr" [dd5ee812-d26b-4dfa-a00d-7cc3e2a97c4a] Running
	I0708 23:05:04.194295  258367 system_pods.go:89] "snapshot-controller-989f9ddc8-dw52m" [e4c2411c-bf6b-4d75-9cb7-5d6404e63c8e] Running
	I0708 23:05:04.194299  258367 system_pods.go:89] "snapshot-controller-989f9ddc8-wplln" [7933b7c4-ea5d-4dd9-b07a-b6d744284afe] Running
	I0708 23:05:04.194306  258367 system_pods.go:89] "storage-provisioner" [b28c0bf2-6ddc-4279-a4b8-40712894afe3] Running
	I0708 23:05:04.194311  258367 system_pods.go:126] duration metric: took 7.310147ms to wait for k8s-apps to be running ...
	I0708 23:05:04.194322  258367 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 23:05:04.194365  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:05:04.211032  258367 system_svc.go:56] duration metric: took 16.707896ms WaitForService to wait for kubelet.
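The kubelet-service check maps directly onto systemctl's exit status: "systemctl is-active --quiet <unit>" exits 0 only when the unit is active. A minimal sketch of the same test, checking just the kubelet unit:

// kubelet_active_sketch.go - illustrative service-state check.
package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning reports whether the kubelet systemd unit is active;
// is-active --quiet prints nothing and signals state via exit code.
func kubeletRunning() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletRunning())
}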
	I0708 23:05:04.211068  258367 kubeadm.go:547] duration metric: took 1m59.15259272s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0708 23:05:04.211096  258367 node_conditions.go:102] verifying NodePressure condition ...
	I0708 23:05:04.213899  258367 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0708 23:05:04.213926  258367 node_conditions.go:123] node cpu capacity is 2
	I0708 23:05:04.213937  258367 node_conditions.go:105] duration metric: took 2.836867ms to run NodePressure ...
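The NodePressure step reads the node's capacity fields, which is where the 40474572Ki ephemeral-storage and 2-CPU figures above come from. One hedged way to pull the same numbers with client-go (the kubeconfig path and iterating over all nodes are assumptions):

// node_capacity_sketch.go - illustrative node capacity read.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print the capacity figures the NodePressure check verifies.
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}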
	I0708 23:05:04.213945  258367 start.go:225] waiting for startup goroutines ...
	I0708 23:05:04.557167  258367 start.go:462] kubectl: 1.21.2, cluster: 1.21.2 (minor skew: 0)
	I0708 23:05:04.559326  258367 out.go:165] * Done! kubectl is now configured to use "addons-20210708230204-257783" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Thu 2021-07-08 23:02:10 UTC, end at Thu 2021-07-08 23:08:00 UTC. --
	Jul 08 23:07:11 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:11.188699621Z" level=info msg="Started container 1c62578597a0aaf434a03c912a2d42171c607e979abcf94530f2bf4a872aca6a: olm/catalog-operator-75d496484d-m4465/catalog-operator" id=04f260d8-6a22-4d40-bc39-ccd29e4dca0c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:07:12 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:12.000399842Z" level=info msg="Removing container: 65088ae236413c0f620aaeeb1be0752a5a6e0ef2b8f1b7b34cc0fe59fd0a1d16" id=3c6f7f39-06b6-4dfb-ad12-69d2122e1083 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:07:12 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:12.026118374Z" level=info msg="Removed container 65088ae236413c0f620aaeeb1be0752a5a6e0ef2b8f1b7b34cc0fe59fd0a1d16: olm/catalog-operator-75d496484d-m4465/catalog-operator" id=3c6f7f39-06b6-4dfb-ad12-69d2122e1083 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:07:23 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:23.081756307Z" level=info msg="Checking image status: quay.io/operator-framework/olm:v0.17.0@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607" id=a89be3d4-db14-4dc6-a64b-66c610348939 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:07:23 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:23.082579058Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2,RepoTags:[],RepoDigests:[quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607],Size_:228537074,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a89be3d4-db14-4dc6-a64b-66c610348939 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:07:23 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:23.083089242Z" level=info msg="Checking image status: quay.io/operator-framework/olm:v0.17.0@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607" id=8250e3eb-77df-46ef-bd20-94de10d4050e name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:07:23 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:23.083824988Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2,RepoTags:[],RepoDigests:[quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607],Size_:228537074,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8250e3eb-77df-46ef-bd20-94de10d4050e name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:07:23 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:23.084509814Z" level=info msg="Creating container: olm/olm-operator-859c88c96-mqphx/olm-operator" id=c699a20d-6d38-445c-b194-7943f466860a name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:07:23 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:23.166479736Z" level=info msg="Created container 1103f7c28b604d6a8559bf485ed7f8a9743d4a039e4e46a33388f57e7a467363: olm/olm-operator-859c88c96-mqphx/olm-operator" id=c699a20d-6d38-445c-b194-7943f466860a name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:07:23 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:23.166843675Z" level=info msg="Starting container: 1103f7c28b604d6a8559bf485ed7f8a9743d4a039e4e46a33388f57e7a467363" id=349a9924-3ed8-4504-809e-9918e5f31b0c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:07:23 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:23.177827535Z" level=info msg="Started container 1103f7c28b604d6a8559bf485ed7f8a9743d4a039e4e46a33388f57e7a467363: olm/olm-operator-859c88c96-mqphx/olm-operator" id=349a9924-3ed8-4504-809e-9918e5f31b0c name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:07:24 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:24.020799220Z" level=info msg="Removing container: fbb95717b24b5e0e27792b8f2c5114f4d303e7cbd5b004ee20d51fa377a29dfd" id=8266193f-0e30-449a-858d-7a2b3fbe541f name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:07:24 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:24.046797385Z" level=info msg="Removed container fbb95717b24b5e0e27792b8f2c5114f4d303e7cbd5b004ee20d51fa377a29dfd: olm/olm-operator-859c88c96-mqphx/olm-operator" id=8266193f-0e30-449a-858d-7a2b3fbe541f name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:07:59 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:59.888162484Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=a146ae4a-1dc1-4043-9dcf-57e85d6cd54d name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:07:59 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:07:59.888742698Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d055819ed991a06271de68c9bc251fdc3d007c30e8166f814d0cdbd656c0d259,RepoTags:[k8s.gcr.io/pause:3.4.1],RepoDigests:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause@sha256:e3da12d02952a9f87ffe8d193f8a5d85a218cf728bc4dc713b055c2c05d8b370],Size_:491161,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a146ae4a-1dc1-4043-9dcf-57e85d6cd54d name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:08:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:08:00.018268983Z" level=info msg="Stopping container: 87a6fc1950193e0b19e18f9609b63048aab4c2cc52441cce300cc246be3bf700 (timeout: 30s)" id=8c8cd9f9-2250-43e6-904c-f50f48783c44 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 08 23:08:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:08:00.085121080Z" level=info msg="Stopping pod sandbox: ac4f99f4688b4bed4b3b0121d2b6b1181005ba90f60ca8e043ca8cd5794c2f69" id=261863bb-4b9f-4c70-b851-29ba80169048 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:08:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:08:00.096901577Z" level=info msg="Got pod network &{Name:registry-proxy-fbwfb Namespace:kube-system ID:ac4f99f4688b4bed4b3b0121d2b6b1181005ba90f60ca8e043ca8cd5794c2f69 NetNS:/var/run/netns/6b27cef6-c4ea-469b-a4eb-d6ff1bf5c922 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Jul 08 23:08:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:08:00.097067974Z" level=info msg="About to del CNI network kindnet (type=ptp)"
	Jul 08 23:08:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:08:00.253987151Z" level=info msg="Stopped container 87a6fc1950193e0b19e18f9609b63048aab4c2cc52441cce300cc246be3bf700: kube-system/registry-pzwnr/registry" id=8c8cd9f9-2250-43e6-904c-f50f48783c44 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 08 23:08:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:08:00.254418166Z" level=info msg="Stopping pod sandbox: de3d35583903159f7570ced5ecd8a5985db8008e892c61151f00eb81163dfe29" id=6f9806bc-6fbe-4302-9f40-af7a016886f6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:08:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:08:00.255483276Z" level=info msg="Got pod network &{Name:registry-pzwnr Namespace:kube-system ID:de3d35583903159f7570ced5ecd8a5985db8008e892c61151f00eb81163dfe29 NetNS:/var/run/netns/c83a85ba-3097-417c-b2c6-7da7b787619c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Jul 08 23:08:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:08:00.255641402Z" level=info msg="About to del CNI network kindnet (type=ptp)"
	Jul 08 23:08:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:08:00.366251833Z" level=info msg="Stopped pod sandbox: ac4f99f4688b4bed4b3b0121d2b6b1181005ba90f60ca8e043ca8cd5794c2f69" id=261863bb-4b9f-4c70-b851-29ba80169048 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:08:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:08:00.411326496Z" level=info msg="Stopped pod sandbox: de3d35583903159f7570ced5ecd8a5985db8008e892c61151f00eb81163dfe29" id=6f9806bc-6fbe-4302-9f40-af7a016886f6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                   CREATED              STATE               NAME                                     ATTEMPT             POD ID
	1103f7c28b604       d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2                                                                        37 seconds ago       Exited              olm-operator                             5                   39716b2bcc234
	1c62578597a0a       d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2                                                                        49 seconds ago       Exited              catalog-operator                         5                   0d57c0d81cf66
	75916813072f7       60dc18151daf8df97f82f5d510aaf2657916cb473abf872ddeec9df443d333ce                                                                        About a minute ago   Exited              registry-proxy                           5                   ac4f99f4688b4
	6bf62f4f8e219       k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994                            3 minutes ago        Running             liveness-probe                           0                   2f77e85aceccf
	803e5d4030e9e       k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659                           3 minutes ago        Running             hostpath                                 0                   2f77e85aceccf
	9762e7f41feee       k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108                3 minutes ago        Running             node-driver-registrar                    0                   2f77e85aceccf
	8367f78bf3b8f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:278602dc54dda5f556390dc98ddce825e5b90107a0e4beb14cb89bd9325de316                            3 minutes ago        Running             gcp-auth                                 0                   ee327571534b3
	8dbb9502aae74       k8s.gcr.io/ingress-nginx/controller@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a                             3 minutes ago        Running             controller                               0                   33e8fba615e5a
	ab02c52bc4621       k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16   3 minutes ago        Running             csi-external-health-monitor-controller   0                   2f77e85aceccf
	afcafc7107eb1       k8s.gcr.io/sig-storage/csi-attacher@sha256:10c8fe02bd83dc9fb53b4735050a0b5fabe65f74c7abea05c1e6467fa4d038db                             3 minutes ago        Running             csi-attacher                             0                   23ae8dfefaeec
	45e7094c8895e       k8s.gcr.io/metrics-server/metrics-server@sha256:dbc33d7d35d2a9cc5ab402005aa7a0d13be6192f3550c7d42cba8d2d5e3a5d62                        3 minutes ago        Running             metrics-server                           0                   3a53499f5a1c0
	2ab95409e7926       622522dfd285bd96f991dd541ad5ecd1086dabf33175b7cde1f8e316594ab589                                                                        3 minutes ago        Exited              patch                                    1                   f2470e2108307
	684ad78f0b081       docker.io/jettech/kube-webhook-certgen@sha256:9634f69ece1c801da236dae728cd672ead86cdd299722495dd7ad7089cf4bcd0                          3 minutes ago        Exited              create                                   0                   0881c48c05a0f
	87a6fc1950193       docker.io/library/registry@sha256:d42f9d2035ce5b9181ae8cc81d5646a2070a33c8125e21dc0d9e8dbddba97d69                                      3 minutes ago        Exited              registry                                 0                   de3d355839031
	2ab14000c1ca7       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7                          3 minutes ago        Exited              patch                                    0                   2f99d4ef7cfb0
	bc89c8667db10       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7                          3 minutes ago        Exited              create                                   0                   74c774f80ed84
	8652c6dc4e0b7       k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:94b72ebcd597276ae63068fe4740feee2fe31de38bb18c37c98c42d8cd618a36        3 minutes ago        Running             csi-external-health-monitor-agent        0                   2f77e85aceccf
	6294c5731c68b       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                      3 minutes ago        Running             volume-snapshot-controller               0                   398745dd67d7b
	ace6c80110b95       k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782                          3 minutes ago        Running             csi-snapshotter                          0                   39ceb5bd02da9
	114ee95dee761       k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a                              4 minutes ago        Running             csi-resizer                              0                   9711e6a0a3c71
	9a348818456cb       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                      4 minutes ago        Running             volume-snapshot-controller               0                   0f64dff40c83a
	bb436f463bf93       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                        4 minutes ago        Running             storage-provisioner                      0                   d5ad87804e6dd
	b6a45c30ce188       1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8                                                                        4 minutes ago        Running             coredns                                  0                   d821186888bb4
	1e5a748a457fa       k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2                          4 minutes ago        Running             csi-provisioner                          0                   2a236376d61fe
	49b31069db4e9       d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105                                                                        4 minutes ago        Running             kube-proxy                               0                   93a788d293a45
	73ad782fb9631       f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301                                                                        4 minutes ago        Running             kindnet-cni                              0                   9598713e2c095
	44be06430cace       ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4                                                                        5 minutes ago        Running             kube-scheduler                           0                   a9740d6135af4
	8f4ecb2eb8a37       9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630                                                                        5 minutes ago        Running             kube-controller-manager                  0                   9266c7ca5f01a
	31b86861c38b7       2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0                                                                        5 minutes ago        Running             kube-apiserver                           0                   15a6cd929d883
	22dcce2859577       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28                                                                        5 minutes ago        Running             etcd                                     0                   64b65798fba40
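
Note: the registry (87a6fc1950193) and registry-proxy (75916813072f7) containers above are in the Exited state, consistent with the registry test failure recorded below. A minimal sketch for confirming this from inside the node, assuming the cluster from this run is still up (standard crictl usage, not taken from this log):

    out/minikube-linux-arm64 -p addons-20210708230204-257783 ssh -- \
      sudo crictl ps -a --name registry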
	
	* 
	* ==> coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210708230204-257783
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210708230204-257783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5
	                    minikube.k8s.io/name=addons-20210708230204-257783
	                    minikube.k8s.io/updated_at=2021_07_08T23_02_52_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210708230204-257783
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210708230204-257783"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 08 Jul 2021 23:02:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210708230204-257783
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 08 Jul 2021 23:07:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 08 Jul 2021 23:05:40 +0000   Thu, 08 Jul 2021 23:02:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 08 Jul 2021 23:05:40 +0000   Thu, 08 Jul 2021 23:02:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 08 Jul 2021 23:05:40 +0000   Thu, 08 Jul 2021 23:02:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 08 Jul 2021 23:05:40 +0000   Thu, 08 Jul 2021 23:03:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210708230204-257783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                d6a6fe2c-69df-437d-be5e-65297693e451
	  Boot ID:                    7cbe50af-3171-4d81-8fca-78216a04984f
	  Kernel Version:             5.8.0-1038-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.2
	  Kube-Proxy Version:         v1.21.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-5954cc4898-gghhg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  ingress-nginx               ingress-nginx-controller-59b45fb494-xzc2t               100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m51s
	  kube-system                 coredns-558bd4d5db-zhg8q                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m57s
	  kube-system                 csi-hostpath-attacher-0                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpath-provisioner-0                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 csi-hostpath-resizer-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 csi-hostpath-snapshotter-0                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 csi-hostpathplugin-0                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 etcd-addons-20210708230204-257783                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m1s
	  kube-system                 kindnet-ccnc6                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m57s
	  kube-system                 kube-apiserver-addons-20210708230204-257783             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-addons-20210708230204-257783    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-6dvf4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-addons-20210708230204-257783             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 metrics-server-77c99ccb96-g7fdg                         100m (5%)     0 (0%)      300Mi (3%)       0 (0%)         4m54s
	  kube-system                 registry-proxy-fbwfb                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 registry-pzwnr                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 snapshot-controller-989f9ddc8-dw52m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 snapshot-controller-989f9ddc8-wplln                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  olm                         catalog-operator-75d496484d-m4465                       10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         4m49s
	  olm                         olm-operator-859c88c96-mqphx                            10m (0%)      0 (0%)      160Mi (2%)       0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1070m (53%)  100m (5%)
	  memory             850Mi (10%)  220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  5m20s (x5 over 5m20s)  kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x5 over 5m20s)  kubelet     Node addons-20210708230204-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x4 over 5m20s)  kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m2s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m1s                   kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m1s                   kubelet     Node addons-20210708230204-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m1s                   kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m51s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m21s                  kubelet     Node addons-20210708230204-257783 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000490] FS-Cache: N-cookie c=0000000017e17a7f [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000786] FS-Cache: N-cookie d=0000000052778918 n=0000000001cad34c
	[  +0.000659] FS-Cache: N-key=[8] '2e75010000000000'
	[  +0.311255] FS-Cache: Duplicate cookie detected
	[  +0.000504] FS-Cache: O-cookie c=0000000014ac9dbc [p=000000001984cbd2 fl=226 nc=0 na=1]
	[  +0.000814] FS-Cache: O-cookie d=0000000052778918 n=00000000bafd5126
	[  +0.000704] FS-Cache: O-key=[8] '2c75010000000000'
	[  +0.000510] FS-Cache: N-cookie c=00000000e94062d6 [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000812] FS-Cache: N-cookie d=0000000052778918 n=00000000edbe8e34
	[  +0.000658] FS-Cache: N-key=[8] '2c75010000000000'
	[  +0.000965] FS-Cache: Duplicate cookie detected
	[  +0.000522] FS-Cache: O-cookie c=00000000f7e9a7d0 [p=000000001984cbd2 fl=226 nc=0 na=1]
	[  +0.000899] FS-Cache: O-cookie d=0000000052778918 n=000000008aaa8b20
	[  +0.000656] FS-Cache: O-key=[8] '2e75010000000000'
	[  +0.000483] FS-Cache: N-cookie c=00000000e94062d6 [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000799] FS-Cache: N-cookie d=0000000052778918 n=00000000d5f43b3c
	[  +0.000664] FS-Cache: N-key=[8] '2e75010000000000'
	[  +0.000960] FS-Cache: Duplicate cookie detected
	[  +0.000564] FS-Cache: O-cookie c=000000005908ab4f [p=000000001984cbd2 fl=226 nc=0 na=1]
	[  +0.000814] FS-Cache: O-cookie d=0000000052778918 n=00000000bdc5b826
	[  +0.000669] FS-Cache: O-key=[8] '2d75010000000000'
	[  +0.000501] FS-Cache: N-cookie c=00000000e94062d6 [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000808] FS-Cache: N-cookie d=0000000052778918 n=000000005db15c82
	[  +0.000658] FS-Cache: N-key=[8] '2d75010000000000'
	[Jul 8 22:38] tee (195612): /proc/195320/oom_adj is deprecated, please use /proc/195320/oom_score_adj instead.
	
	* 
	* ==> etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] <==
	* 2021-07-08 23:03:57.016640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:04:07.016506 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:04:17.016701 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:04:27.016397 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:04:37.016508 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:04:47.016182 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:04:57.016525 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:05:07.016543 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:05:17.016783 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:05:27.017028 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:05:37.016572 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:05:47.017090 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:05:57.017153 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:06:07.016938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:06:17.016424 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:06:27.016458 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:06:37.016608 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:06:47.016746 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:06:57.016549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:07:07.017246 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:07:17.016390 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:07:27.016204 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:07:37.016548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:07:47.016814 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:07:57.016869 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  23:08:01 up  1:50,  0 users,  load average: 0.64, 1.17, 1.50
	Linux addons-20210708230204-257783 5.8.0-1038-aws #40~20.04.1-Ubuntu SMP Thu Jun 17 13:20:15 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] <==
	* W0708 23:04:11.037321       1 handler_proxy.go:102] no RequestInfo found in the context
	E0708 23:04:11.037359       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0708 23:04:11.037366       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0708 23:04:30.796405       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.175.74:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.175.74:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.175.74:443: connect: connection refused
	E0708 23:04:30.796969       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.175.74:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.175.74:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.175.74:443: connect: connection refused
	E0708 23:04:30.802901       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.175.74:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.175.74:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.175.74:443: connect: connection refused
	I0708 23:04:36.009536       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:04:36.009571       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:04:36.009579       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:05:13.727829       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:05:13.727867       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:05:13.727874       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:05:53.936712       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:05:53.936747       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:05:53.936755       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:06:31.749145       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:06:31.749181       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:06:31.749188       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:07:07.997424       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:07:07.997463       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:07:07.997471       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:07:39.019246       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:07:39.019282       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:07:39.019289       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] <==
	* I0708 23:03:13.149180       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
	I0708 23:03:13.206198       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
	I0708 23:03:13.295241       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
	E0708 23:03:34.145628       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0708 23:03:34.145779       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
	I0708 23:03:34.145868       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for volumesnapshots.snapshot.storage.k8s.io
	I0708 23:03:34.145908       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
	I0708 23:03:34.145942       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
	I0708 23:03:34.145974       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
	I0708 23:03:34.145999       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
	I0708 23:03:34.146075       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0708 23:03:34.446530       1 shared_informer.go:247] Caches are synced for resource quota 
	W0708 23:03:34.686123       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0708 23:03:34.696347       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0708 23:03:34.698522       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0708 23:03:34.699610       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0708 23:03:34.800676       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0708 23:03:40.212641       1 event.go:291] "Event occurred" object="kube-system/registry-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-proxy-fbwfb"
	I0708 23:03:43.891811       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0708 23:04:04.478540       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0708 23:04:04.830903       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0708 23:04:07.312540       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0708 23:04:09.202084       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0708 23:04:17.352167       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0708 23:04:18.369277       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	
	* 
	* ==> kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] <==
	* I0708 23:03:09.902381       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0708 23:03:09.906684       1 server_others.go:140] Detected node IP 192.168.49.2
	W0708 23:03:09.906750       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0708 23:03:10.210383       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0708 23:03:10.210464       1 server_others.go:212] Using iptables Proxier.
	I0708 23:03:10.210493       1 server_others.go:219] creating dualStackProxier for iptables.
	W0708 23:03:10.210520       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0708 23:03:10.210831       1 server.go:643] Version: v1.21.2
	I0708 23:03:10.211356       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0708 23:03:10.211432       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0708 23:03:10.212794       1 config.go:315] Starting service config controller
	I0708 23:03:10.212844       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0708 23:03:10.212883       1 config.go:224] Starting endpoint slice config controller
	I0708 23:03:10.212913       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0708 23:03:10.271414       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0708 23:03:10.348217       1 shared_informer.go:247] Caches are synced for service config 
	W0708 23:03:10.386942       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0708 23:03:10.413707       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] <==
	* I0708 23:02:43.963281       1 serving.go:347] Generated self-signed cert in-memory
	W0708 23:02:48.492553       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 23:02:48.492619       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 23:02:48.492651       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 23:02:48.492673       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 23:02:48.569268       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0708 23:02:48.569786       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 23:02:48.569806       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 23:02:48.569820       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0708 23:02:48.580237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:02:48.580442       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 23:02:48.580560       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:02:48.580655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:02:48.580751       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 23:02:48.580849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 23:02:48.580946       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 23:02:48.581068       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:48.581168       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:02:48.581258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:02:48.587927       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:48.588058       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:48.588185       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:02:48.588270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:49.542237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0708 23:02:50.170585       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-07-08 23:02:10 UTC, end at Thu 2021-07-08 23:08:01 UTC. --
	Jul 08 23:07:37 addons-20210708230204-257783 kubelet[1413]: E0708 23:07:37.094591    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/crio-1103f7c28b604d6a8559bf485ed7f8a9743d4a039e4e46a33388f57e7a467363.scope\": RecentStats: unable to find data in memory cache], [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:07:37 addons-20210708230204-257783 kubelet[1413]: W0708 23:07:37.356200    1413 container.go:586] Failed to update stats for container "/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33": /sys/fs/cgroup/cpuset/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/cpuset.cpus found to be empty, continuing to push stats
	Jul 08 23:07:41 addons-20210708230204-257783 kubelet[1413]: I0708 23:07:41.081555    1413 scope.go:111] "RemoveContainer" containerID="1c62578597a0aaf434a03c912a2d42171c607e979abcf94530f2bf4a872aca6a"
	Jul 08 23:07:41 addons-20210708230204-257783 kubelet[1413]: E0708 23:07:41.081917    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-m4465_olm(9e760c3b-a92b-4bfe-9207-05ac187021fb)\"" pod="olm/catalog-operator-75d496484d-m4465" podUID=9e760c3b-a92b-4bfe-9207-05ac187021fb
	Jul 08 23:07:43 addons-20210708230204-257783 kubelet[1413]: I0708 23:07:43.081704    1413 scope.go:111] "RemoveContainer" containerID="75916813072f7e2d498671dfcf02cadd941bfb60efba7c60c81c9d1e1230b62d"
	Jul 08 23:07:43 addons-20210708230204-257783 kubelet[1413]: E0708 23:07:43.081959    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-fbwfb_kube-system(040628b4-50ba-4169-a8d6-b9804b46e10c)\"" pod="kube-system/registry-proxy-fbwfb" podUID=040628b4-50ba-4169-a8d6-b9804b46e10c
	Jul 08 23:07:45 addons-20210708230204-257783 kubelet[1413]: W0708 23:07:45.638807    1413 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Jul 08 23:07:45 addons-20210708230204-257783 kubelet[1413]: E0708 23:07:45.660341    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:07:48 addons-20210708230204-257783 kubelet[1413]: I0708 23:07:48.082225    1413 scope.go:111] "RemoveContainer" containerID="1103f7c28b604d6a8559bf485ed7f8a9743d4a039e4e46a33388f57e7a467363"
	Jul 08 23:07:48 addons-20210708230204-257783 kubelet[1413]: E0708 23:07:48.083196    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=olm-operator pod=olm-operator-859c88c96-mqphx_olm(9003a85c-a958-402c-8dd8-812ba5acd952)\"" pod="olm/olm-operator-859c88c96-mqphx" podUID=9003a85c-a958-402c-8dd8-812ba5acd952
	Jul 08 23:07:52 addons-20210708230204-257783 kubelet[1413]: E0708 23:07:52.133187    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:07:55 addons-20210708230204-257783 kubelet[1413]: I0708 23:07:55.081597    1413 scope.go:111] "RemoveContainer" containerID="1c62578597a0aaf434a03c912a2d42171c607e979abcf94530f2bf4a872aca6a"
	Jul 08 23:07:55 addons-20210708230204-257783 kubelet[1413]: E0708 23:07:55.081951    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-m4465_olm(9e760c3b-a92b-4bfe-9207-05ac187021fb)\"" pod="olm/catalog-operator-75d496484d-m4465" podUID=9e760c3b-a92b-4bfe-9207-05ac187021fb
	Jul 08 23:07:55 addons-20210708230204-257783 kubelet[1413]: I0708 23:07:55.082231    1413 scope.go:111] "RemoveContainer" containerID="75916813072f7e2d498671dfcf02cadd941bfb60efba7c60c81c9d1e1230b62d"
	Jul 08 23:07:55 addons-20210708230204-257783 kubelet[1413]: E0708 23:07:55.082417    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-fbwfb_kube-system(040628b4-50ba-4169-a8d6-b9804b46e10c)\"" pod="kube-system/registry-proxy-fbwfb" podUID=040628b4-50ba-4169-a8d6-b9804b46e10c
	Jul 08 23:07:55 addons-20210708230204-257783 kubelet[1413]: E0708 23:07:55.736144    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:08:00 addons-20210708230204-257783 kubelet[1413]: I0708 23:08:00.084278    1413 scope.go:111] "RemoveContainer" containerID="1103f7c28b604d6a8559bf485ed7f8a9743d4a039e4e46a33388f57e7a467363"
	Jul 08 23:08:00 addons-20210708230204-257783 kubelet[1413]: E0708 23:08:00.084668    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=olm-operator pod=olm-operator-859c88c96-mqphx_olm(9003a85c-a958-402c-8dd8-812ba5acd952)\"" pod="olm/olm-operator-859c88c96-mqphx" podUID=9003a85c-a958-402c-8dd8-812ba5acd952
	Jul 08 23:08:00 addons-20210708230204-257783 kubelet[1413]: E0708 23:08:00.177998    1413 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2592ead28da0ccb145ba02c911e24818e6ff0e46a8673a52a7bdd52bd4ac27e8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2592ead28da0ccb145ba02c911e24818e6ff0e46a8673a52a7bdd52bd4ac27e8/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/olm_catalog-operator-75d496484d-m4465_9e760c3b-a92b-4bfe-9207-05ac187021fb/catalog-operator/4.log" to get inode usage: stat /var/log/pods/olm_catalog-operator-75d496484d-m4465_9e760c3b-a92b-4bfe-9207-05ac187021fb/catalog-operator/4.log: no such file or directory
	Jul 08 23:08:00 addons-20210708230204-257783 kubelet[1413]: E0708 23:08:00.178157    1413 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/79e647939c814261228b3ad70553cac5255021ba1eefed69a17d00a179b14803/diff" to get inode usage: stat /var/lib/containers/storage/overlay/79e647939c814261228b3ad70553cac5255021ba1eefed69a17d00a179b14803/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/olm_olm-operator-859c88c96-mqphx_9003a85c-a958-402c-8dd8-812ba5acd952/olm-operator/4.log" to get inode usage: stat /var/log/pods/olm_olm-operator-859c88c96-mqphx_9003a85c-a958-402c-8dd8-812ba5acd952/olm-operator/4.log: no such file or directory
	Jul 08 23:08:01 addons-20210708230204-257783 kubelet[1413]: I0708 23:08:01.095428    1413 scope.go:111] "RemoveContainer" containerID="87a6fc1950193e0b19e18f9609b63048aab4c2cc52441cce300cc246be3bf700"
	Jul 08 23:08:01 addons-20210708230204-257783 kubelet[1413]: I0708 23:08:01.126298    1413 scope.go:111] "RemoveContainer" containerID="87a6fc1950193e0b19e18f9609b63048aab4c2cc52441cce300cc246be3bf700"
	Jul 08 23:08:01 addons-20210708230204-257783 kubelet[1413]: E0708 23:08:01.128317    1413 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87a6fc1950193e0b19e18f9609b63048aab4c2cc52441cce300cc246be3bf700\": container with ID starting with 87a6fc1950193e0b19e18f9609b63048aab4c2cc52441cce300cc246be3bf700 not found: ID does not exist" containerID="87a6fc1950193e0b19e18f9609b63048aab4c2cc52441cce300cc246be3bf700"
	Jul 08 23:08:01 addons-20210708230204-257783 kubelet[1413]: I0708 23:08:01.128357    1413 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:87a6fc1950193e0b19e18f9609b63048aab4c2cc52441cce300cc246be3bf700} err="failed to get container status \"87a6fc1950193e0b19e18f9609b63048aab4c2cc52441cce300cc246be3bf700\": rpc error: code = NotFound desc = could not find container \"87a6fc1950193e0b19e18f9609b63048aab4c2cc52441cce300cc246be3bf700\": container with ID starting with 87a6fc1950193e0b19e18f9609b63048aab4c2cc52441cce300cc246be3bf700 not found: ID does not exist"
	Jul 08 23:08:01 addons-20210708230204-257783 kubelet[1413]: I0708 23:08:01.128369    1413 scope.go:111] "RemoveContainer" containerID="75916813072f7e2d498671dfcf02cadd941bfb60efba7c60c81c9d1e1230b62d"
	
	* 
	* ==> storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] <==
	* I0708 23:03:53.849841       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 23:03:53.884827       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 23:03:53.884984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 23:03:53.891142       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 23:03:53.891492       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210708230204-257783_62469888-f004-4c23-ac43-994b853f756a!
	I0708 23:03:53.892324       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32752ce7-5488-420e-924b-bb68b54fe2d8", APIVersion:"v1", ResourceVersion:"1011", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210708230204-257783_62469888-f004-4c23-ac43-994b853f756a became leader
	I0708 23:03:53.991866       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210708230204-257783_62469888-f004-4c23-ac43-994b853f756a!
	

-- /stdout --
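
The storage-provisioner log above records its leader election on a kube-system Endpoints object. As a sketch (using the object name from the log; this command is not part of the test run), the election record can be inspected with:

	kubectl --context addons-20210708230204-257783 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
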
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210708230204-257783 -n addons-20210708230204-257783
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210708230204-257783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: gcp-auth-certs-create-lkh9q gcp-auth-certs-patch-pw9m6 ingress-nginx-admission-create-xwrn5 ingress-nginx-admission-patch-pp588
helpers_test.go:270: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context addons-20210708230204-257783 describe pod gcp-auth-certs-create-lkh9q gcp-auth-certs-patch-pw9m6 ingress-nginx-admission-create-xwrn5 ingress-nginx-admission-patch-pp588
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context addons-20210708230204-257783 describe pod gcp-auth-certs-create-lkh9q gcp-auth-certs-patch-pw9m6 ingress-nginx-admission-create-xwrn5 ingress-nginx-admission-patch-pp588: exit status 1 (76.714268ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-lkh9q" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-pw9m6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-xwrn5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pp588" not found

** /stderr **
helpers_test.go:275: kubectl --context addons-20210708230204-257783 describe pod gcp-auth-certs-create-lkh9q gcp-auth-certs-patch-pw9m6 ingress-nginx-admission-create-xwrn5 ingress-nginx-admission-patch-pp588: exit status 1
--- FAIL: TestAddons/parallel/Registry (177.41s)

TestAddons/parallel/Ingress (303.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:340: "ingress-nginx-admission-create-xwrn5" [0e325db5-26e4-4cdb-805a-d3f13029c19d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 3.972657ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210708230204-257783 replace --force -f testdata/nginx-ingv1beta.yaml
addons_test.go:170: kubectl --context addons-20210708230204-257783 replace --force -f testdata/nginx-ingv1beta.yaml: unexpected stderr: Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
(may be temporary)
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210708230204-257783 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:340: "nginx" [7a1859af-cf2c-47b7-8d0b-1cf304551016] Pending
helpers_test.go:340: "nginx" [7a1859af-cf2c-47b7-8d0b-1cf304551016] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:340: "nginx" [7a1859af-cf2c-47b7-8d0b-1cf304551016] Running
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.020058899s
addons_test.go:204: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-20210708230204-257783 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.749636361s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:224: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
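
The ssh exit status 28 above matches curl's exit code for an operation timeout, i.e. the request hung rather than being refused. A minimal manual reproduction sketch, mirroring the test's command with an explicit curl timeout added (hypothetical; not run by the test):

	out/minikube-linux-arm64 -p addons-20210708230204-257783 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
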
addons_test.go:230: (dbg) Run:  kubectl --context addons-20210708230204-257783 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-20210708230204-257783 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.724189099s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:275: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
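
When both probes time out like this, a natural follow-up is to inspect the controller behind the ingress; a sketch using the app.kubernetes.io/name=ingress-nginx label the test waits on (assumed commands, not part of the test run):

	kubectl --context addons-20210708230204-257783 -n ingress-nginx get pods -o wide
	kubectl --context addons-20210708230204-257783 -n ingress-nginx logs -l app.kubernetes.io/name=ingress-nginx --tail=50
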
addons_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 addons disable ingress --alsologtostderr -v=1
addons_test.go:278: (dbg) Done: out/minikube-linux-arm64 -p addons-20210708230204-257783 addons disable ingress --alsologtostderr -v=1: (28.806729479s)
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210708230204-257783
helpers_test.go:236: (dbg) docker inspect addons-20210708230204-257783:

-- stdout --
	[
	    {
	        "Id": "077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33",
	        "Created": "2021-07-08T23:02:08.861515915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-07-08T23:02:09.476547454Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/hostname",
	        "HostsPath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/hosts",
	        "LogPath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33-json.log",
	        "Name": "/addons-20210708230204-257783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210708230204-257783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210708230204-257783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa317
1af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/d
ocker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41
f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210708230204-257783",
	                "Source": "/var/lib/docker/volumes/addons-20210708230204-257783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210708230204-257783",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210708230204-257783",
	                "name.minikube.sigs.k8s.io": "addons-20210708230204-257783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e2a6a80abfdd90c8450743b36a071af48d5dfe35af3935906d8f359ff63e391d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49502"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49501"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49498"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49499"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e2a6a80abfdd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210708230204-257783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "077ecedfa7d5",
	                        "addons-20210708230204-257783"
	                    ],
	                    "NetworkID": "1f94ce698172ccdc730b6d5814ec69a10719715b26179dea78e95db77a131746",
	                    "EndpointID": "222e425dd05f5203c7200057bfd152a381a3ac67c46e9ae1bda0a98569a14d86",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
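
The Ports block in the inspect output above can be read with a Go template instead of scanning the JSON; minikube itself uses the same template later in the start log. For this run the forwarded SSH port is 49502:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-20210708230204-257783
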
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210708230204-257783 -n addons-20210708230204-257783
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 logs -n 25
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                  |                Profile                |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                 | download-only-20210708230110-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:01:49 UTC | Thu, 08 Jul 2021 23:01:49 UTC |
	| delete  | -p                                    | download-only-20210708230110-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:01:49 UTC | Thu, 08 Jul 2021 23:01:49 UTC |
	|         | download-only-20210708230110-257783   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-only-20210708230110-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:01:49 UTC | Thu, 08 Jul 2021 23:01:49 UTC |
	|         | download-only-20210708230110-257783   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-docker-20210708230149-257783 | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:02:04 UTC | Thu, 08 Jul 2021 23:02:04 UTC |
	|         | download-docker-20210708230149-257783 |                                       |         |         |                               |                               |
	| start   | -p                                    | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:02:04 UTC | Thu, 08 Jul 2021 23:05:04 UTC |
	|         | addons-20210708230204-257783          |                                       |         |         |                               |                               |
	|         | --wait=true --memory=4000             |                                       |         |         |                               |                               |
	|         | --alsologtostderr                     |                                       |         |         |                               |                               |
	|         | --addons=registry                     |                                       |         |         |                               |                               |
	|         | --addons=metrics-server               |                                       |         |         |                               |                               |
	|         | --addons=olm                          |                                       |         |         |                               |                               |
	|         | --addons=volumesnapshots              |                                       |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver          |                                       |         |         |                               |                               |
	|         | --driver=docker                       |                                       |         |         |                               |                               |
	|         | --container-runtime=crio              |                                       |         |         |                               |                               |
	|         | --addons=ingress                      |                                       |         |         |                               |                               |
	|         | --addons=gcp-auth                     |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:05:17 UTC | Thu, 08 Jul 2021 23:05:17 UTC |
	|         | ip                                    |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:07:59 UTC | Thu, 08 Jul 2021 23:08:00 UTC |
	|         | addons disable registry               |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:08:00 UTC | Thu, 08 Jul 2021 23:08:01 UTC |
	|         | logs -n 25                            |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:08:11 UTC | Thu, 08 Jul 2021 23:08:17 UTC |
	|         | addons disable gcp-auth               |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:08:53 UTC | Thu, 08 Jul 2021 23:09:00 UTC |
	|         | addons disable                        |                                       |         |         |                               |                               |
	|         | csi-hostpath-driver                   |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:09:00 UTC | Thu, 08 Jul 2021 23:09:01 UTC |
	|         | addons disable volumesnapshots        |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:09:06 UTC | Thu, 08 Jul 2021 23:09:07 UTC |
	|         | addons disable metrics-server         |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:13:39 UTC | Thu, 08 Jul 2021 23:14:08 UTC |
	|         | addons disable ingress                |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
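
The multi-row start entry above records a single invocation; flattened onto one line (binary path assumed from the other commands in this report), it reads:

	out/minikube-linux-arm64 start -p addons-20210708230204-257783 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker --container-runtime=crio --addons=ingress --addons=gcp-auth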
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/07/08 23:02:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 23:02:04.595093  258367 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:02:04.595210  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:02:04.595232  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:02:04.595242  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:02:04.595370  258367 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:02:04.595646  258367 out.go:293] Setting JSON to false
	I0708 23:02:04.596449  258367 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6273,"bootTime":1625779051,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:02:04.596515  258367 start.go:121] virtualization:  
	I0708 23:02:04.599175  258367 out.go:165] * [addons-20210708230204-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0708 23:02:04.601946  258367 out.go:165]   - MINIKUBE_LOCATION=11942
	I0708 23:02:04.600696  258367 notify.go:169] Checking for updates...
	I0708 23:02:04.604376  258367 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:02:04.606615  258367 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0708 23:02:04.609018  258367 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0708 23:02:04.609162  258367 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:02:04.654113  258367 docker.go:132] docker version: linux-20.10.7
	I0708 23:02:04.654191  258367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:02:04.757973  258367 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:02:04.699594566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:02:04.758088  258367 docker.go:244] overlay module found
	I0708 23:02:04.760790  258367 out.go:165] * Using the docker driver based on user configuration
	I0708 23:02:04.760807  258367 start.go:278] selected driver: docker
	I0708 23:02:04.760812  258367 start.go:751] validating driver "docker" against <nil>
	I0708 23:02:04.760826  258367 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0708 23:02:04.760863  258367 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0708 23:02:04.760877  258367 out.go:230] ! Your cgroup does not allow setting memory.
	I0708 23:02:04.763505  258367 out.go:165]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0708 23:02:04.763788  258367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:02:04.843896  258367 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:02:04.79378416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:02:04.844011  258367 start_flags.go:261] no existing cluster config was found, will generate one from the flags 
	I0708 23:02:04.844165  258367 start_flags.go:687] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 23:02:04.844188  258367 cni.go:93] Creating CNI manager for ""
	I0708 23:02:04.844194  258367 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:02:04.844202  258367 start_flags.go:270] Found "CNI" CNI - setting NetworkPlugin=cni
	I0708 23:02:04.844213  258367 start_flags.go:275] config:
	{Name:addons-20210708230204-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:02:04.846957  258367 out.go:165] * Starting control plane node addons-20210708230204-257783 in cluster addons-20210708230204-257783
	I0708 23:02:04.846989  258367 cache.go:117] Beginning downloading kic base image for docker with crio
	I0708 23:02:04.849261  258367 out.go:165] * Pulling base image ...
	I0708 23:02:04.849281  258367 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:02:04.849315  258367 preload.go:150] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4
	I0708 23:02:04.849325  258367 cache.go:56] Caching tarball of preloaded images
	I0708 23:02:04.849482  258367 preload.go:174] Found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0708 23:02:04.849503  258367 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.2 on crio
	I0708 23:02:04.849776  258367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/config.json ...
	I0708 23:02:04.849798  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/config.json: {Name:mk6e320fc3a23d8bae7a0dedef336e80220bbb8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:04.849933  258367 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0708 23:02:04.882996  258367 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0708 23:02:04.883027  258367 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0708 23:02:04.883046  258367 cache.go:205] Successfully downloaded all kic artifacts
	I0708 23:02:04.883069  258367 start.go:313] acquiring machines lock for addons-20210708230204-257783: {Name:mk70de6724665814088ca786aa95a9c4f42a89ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 23:02:04.883169  258367 start.go:317] acquired machines lock for "addons-20210708230204-257783" in 87.3µs
	I0708 23:02:04.883190  258367 start.go:89] Provisioning new machine with config: &{Name:addons-20210708230204-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0708 23:02:04.883252  258367 start.go:126] createHost starting for "" (driver="docker")
	I0708 23:02:04.885753  258367 out.go:192] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0708 23:02:04.885967  258367 start.go:160] libmachine.API.Create for "addons-20210708230204-257783" (driver="docker")
	I0708 23:02:04.885995  258367 client.go:168] LocalClient.Create starting
	I0708 23:02:04.886069  258367 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem
	I0708 23:02:05.051741  258367 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem
	I0708 23:02:05.311405  258367 cli_runner.go:115] Run: docker network inspect addons-20210708230204-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0708 23:02:05.343632  258367 cli_runner.go:162] docker network inspect addons-20210708230204-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0708 23:02:05.343702  258367 network_create.go:255] running [docker network inspect addons-20210708230204-257783] to gather additional debugging logs...
	I0708 23:02:05.343720  258367 cli_runner.go:115] Run: docker network inspect addons-20210708230204-257783
	W0708 23:02:05.374816  258367 cli_runner.go:162] docker network inspect addons-20210708230204-257783 returned with exit code 1
	I0708 23:02:05.374839  258367 network_create.go:258] error running [docker network inspect addons-20210708230204-257783]: docker network inspect addons-20210708230204-257783: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210708230204-257783
	I0708 23:02:05.374849  258367 network_create.go:260] output of [docker network inspect addons-20210708230204-257783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210708230204-257783
	
	** /stderr **
	I0708 23:02:05.374904  258367 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0708 23:02:05.406189  258367 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x400000eff8] misses:0}
	I0708 23:02:05.406225  258367 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0708 23:02:05.406242  258367 network_create.go:106] attempt to create docker network addons-20210708230204-257783 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0708 23:02:05.406286  258367 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210708230204-257783
	I0708 23:02:05.548723  258367 network_create.go:90] docker network addons-20210708230204-257783 192.168.49.0/24 created
	I0708 23:02:05.548749  258367 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210708230204-257783" container
	I0708 23:02:05.548819  258367 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0708 23:02:05.580386  258367 cli_runner.go:115] Run: docker volume create addons-20210708230204-257783 --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --label created_by.minikube.sigs.k8s.io=true
	I0708 23:02:05.612609  258367 oci.go:102] Successfully created a docker volume addons-20210708230204-257783
	I0708 23:02:05.612679  258367 cli_runner.go:115] Run: docker run --rm --name addons-20210708230204-257783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --entrypoint /usr/bin/test -v addons-20210708230204-257783:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0708 23:02:08.695369  258367 cli_runner.go:168] Completed: docker run --rm --name addons-20210708230204-257783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --entrypoint /usr/bin/test -v addons-20210708230204-257783:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (3.082656193s)
	I0708 23:02:08.695392  258367 oci.go:106] Successfully prepared a docker volume addons-20210708230204-257783
	W0708 23:02:08.695416  258367 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0708 23:02:08.695423  258367 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0708 23:02:08.695474  258367 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0708 23:02:08.695676  258367 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:02:08.695770  258367 kic.go:179] Starting extracting preloaded images to volume ...
	I0708 23:02:08.695817  258367 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210708230204-257783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0708 23:02:08.824606  258367 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210708230204-257783 --name addons-20210708230204-257783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210708230204-257783 --network addons-20210708230204-257783 --ip 192.168.49.2 --volume addons-20210708230204-257783:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0708 23:02:09.490697  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Running}}
	I0708 23:02:09.549362  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:02:09.602057  258367 cli_runner.go:115] Run: docker exec addons-20210708230204-257783 stat /var/lib/dpkg/alternatives/iptables
	I0708 23:02:09.698879  258367 oci.go:278] the created container "addons-20210708230204-257783" has a running status.
	I0708 23:02:09.698906  258367 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa...
	I0708 23:02:10.039045  258367 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0708 23:02:10.218918  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:02:10.268015  258367 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0708 23:02:10.268054  258367 kic_runner.go:115] Args: [docker exec --privileged addons-20210708230204-257783 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0708 23:02:19.993425  258367 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210708230204-257783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (11.297574532s)
	I0708 23:02:19.993449  258367 kic.go:188] duration metric: took 11.297752 seconds to extract preloaded images to volume
	I0708 23:02:19.993522  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:02:20.033201  258367 machine.go:88] provisioning docker machine ...
	I0708 23:02:20.033239  258367 ubuntu.go:169] provisioning hostname "addons-20210708230204-257783"
	I0708 23:02:20.033296  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:20.073522  258367 main.go:130] libmachine: Using SSH client type: native
	I0708 23:02:20.073698  258367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49502 <nil> <nil>}
	I0708 23:02:20.073711  258367 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210708230204-257783 && echo "addons-20210708230204-257783" | sudo tee /etc/hostname
	I0708 23:02:20.195296  258367 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210708230204-257783
	
	I0708 23:02:20.195361  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:20.239296  258367 main.go:130] libmachine: Using SSH client type: native
	I0708 23:02:20.239447  258367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49502 <nil> <nil>}
	I0708 23:02:20.239473  258367 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210708230204-257783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210708230204-257783/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210708230204-257783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 23:02:20.358452  258367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
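	The shell snippet above keeps /etc/hosts consistent with the new hostname: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended. A quick verification sketch (container name from this log):
	
	docker exec addons-20210708230204-257783 grep '^127.0.1.1' /etc/hosts
	# expected: 127.0.1.1 addons-20210708230204-257783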
	I0708 23:02:20.358475  258367 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube}
	I0708 23:02:20.358493  258367 ubuntu.go:177] setting up certificates
	I0708 23:02:20.358501  258367 provision.go:83] configureAuth start
	I0708 23:02:20.358550  258367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210708230204-257783
	I0708 23:02:20.392386  258367 provision.go:137] copyHostCerts
	I0708 23:02:20.392450  258367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem (1078 bytes)
	I0708 23:02:20.392535  258367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem (1123 bytes)
	I0708 23:02:20.392595  258367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem (1679 bytes)
	I0708 23:02:20.392646  258367 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem org=jenkins.addons-20210708230204-257783 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210708230204-257783]
	I0708 23:02:21.241180  258367 provision.go:171] copyRemoteCerts
	I0708 23:02:21.241232  258367 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 23:02:21.241271  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.274111  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.353415  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 23:02:21.367359  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0708 23:02:21.381206  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 23:02:21.395073  258367 provision.go:86] duration metric: configureAuth took 1.036561369s
	I0708 23:02:21.395089  258367 ubuntu.go:193] setting minikube options for container-runtime
	I0708 23:02:21.395329  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.428650  258367 main.go:130] libmachine: Using SSH client type: native
	I0708 23:02:21.428842  258367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49502 <nil> <nil>}
	I0708 23:02:21.428857  258367 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	I0708 23:02:21.545263  258367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 23:02:21.545312  258367 machine.go:91] provisioned docker machine in 1.512083207s
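	Regarding the crio.minikube file written just above: it is a one-line environment file marking the 10.96.0.0/12 service CIDR as an insecure registry range for CRI-O. Checking the result (sketch):
	
	docker exec addons-20210708230204-257783 cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '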
	I0708 23:02:21.545325  258367 client.go:171] LocalClient.Create took 16.659321464s
	I0708 23:02:21.545341  258367 start.go:168] duration metric: libmachine.API.Create for "addons-20210708230204-257783" took 16.659372186s
	I0708 23:02:21.545355  258367 start.go:267] post-start starting for "addons-20210708230204-257783" (driver="docker")
	I0708 23:02:21.545361  258367 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 23:02:21.545424  258367 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 23:02:21.545473  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.578362  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.657484  258367 ssh_runner.go:149] Run: cat /etc/os-release
	I0708 23:02:21.659635  258367 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0708 23:02:21.659659  258367 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0708 23:02:21.659671  258367 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0708 23:02:21.659680  258367 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0708 23:02:21.659688  258367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/addons for local assets ...
	I0708 23:02:21.659746  258367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/files for local assets ...
	I0708 23:02:21.659773  258367 start.go:270] post-start completed in 114.410955ms
	I0708 23:02:21.660043  258367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210708230204-257783
	I0708 23:02:21.693487  258367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/config.json ...
	I0708 23:02:21.693686  258367 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 23:02:21.693730  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.726619  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.803192  258367 start.go:129] duration metric: createHost completed in 16.919928817s
	I0708 23:02:21.803212  258367 start.go:80] releasing machines lock for "addons-20210708230204-257783", held for 16.920036259s
	I0708 23:02:21.803282  258367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210708230204-257783
	I0708 23:02:21.836270  258367 ssh_runner.go:149] Run: systemctl --version
	I0708 23:02:21.836314  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.836333  258367 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0708 23:02:21.836381  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.875785  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.876742  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.955373  258367 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0708 23:02:22.089407  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0708 23:02:22.097604  258367 docker.go:153] disabling docker service ...
	I0708 23:02:22.097648  258367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0708 23:02:22.106161  258367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0708 23:02:22.113999  258367 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0708 23:02:22.187402  258367 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0708 23:02:22.276539  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0708 23:02:22.284617  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 23:02:22.295874  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0708 23:02:22.302331  258367 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0708 23:02:22.302355  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
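	The two sed edits above replace the pause_image and cni_default_network lines in /etc/crio/crio.conf; CRI-O only picks them up at the systemctl restart a few lines down. A check sketch, assuming the container name from this log:
	
	docker exec addons-20210708230204-257783 \
	  grep -E 'pause_image|cni_default_network' /etc/crio/crio.conf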
	I0708 23:02:22.309020  258367 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 23:02:22.314349  258367 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 23:02:22.319464  258367 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0708 23:02:22.399957  258367 ssh_runner.go:149] Run: sudo systemctl start crio
	I0708 23:02:22.574141  258367 start.go:386] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 23:02:22.574208  258367 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0708 23:02:22.576998  258367 start.go:411] Will wait 60s for crictl version
	I0708 23:02:22.577041  258367 ssh_runner.go:149] Run: sudo crictl version
	I0708 23:02:22.602143  258367 start.go:420] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0708 23:02:22.602207  258367 ssh_runner.go:149] Run: crio --version
	I0708 23:02:22.668574  258367 ssh_runner.go:149] Run: crio --version
	I0708 23:02:22.736496  258367 out.go:165] * Preparing Kubernetes v1.21.2 on CRI-O 1.20.3 ...
	I0708 23:02:22.736572  258367 cli_runner.go:115] Run: docker network inspect addons-20210708230204-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
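	The long Go template above flattens docker network inspect into one JSON line. When only the subnet and gateway matter, a shorter template does (a sketch; the expected values follow from the 192.168.49.1/192.168.49.2 addresses elsewhere in this log, assuming minikube's usual /24):
	
	docker network inspect addons-20210708230204-257783 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected here: 192.168.49.0/24 192.168.49.1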
	I0708 23:02:22.768937  258367 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0708 23:02:22.771623  258367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 23:02:22.779326  258367 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:02:22.779408  258367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:02:22.833839  258367 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:02:22.833860  258367 crio.go:333] Images already preloaded, skipping extraction
	I0708 23:02:22.833905  258367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:02:22.855250  258367 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:02:22.855269  258367 cache_images.go:74] Images are preloaded, skipping loading
	I0708 23:02:22.855326  258367 ssh_runner.go:149] Run: crio config
	I0708 23:02:22.926289  258367 cni.go:93] Creating CNI manager for ""
	I0708 23:02:22.926310  258367 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:02:22.926319  258367 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0708 23:02:22.926333  258367 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210708230204-257783 NodeName:addons-20210708230204-257783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/
minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0708 23:02:22.926456  258367 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "addons-20210708230204-257783"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
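	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what later gets copied to /var/tmp/minikube/kubeadm.yaml. A sketch of a standalone preflight check on that file, run inside the node container; kubeadm init phase preflight and --config are standard kubeadm flags:
	
	sudo /var/lib/minikube/binaries/v1.21.2/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml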
	
	I0708 23:02:22.926544  258367 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-20210708230204-257783 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
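	The ExecStart override above is what switches kubelet from dockershim to CRI-O: --container-runtime=remote plus the crio.sock endpoints. The scp calls below install it as a systemd drop-in; inspecting it afterwards (sketch):
	
	docker exec addons-20210708230204-257783 \
	  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf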
	I0708 23:02:22.926598  258367 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0708 23:02:22.932481  258367 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 23:02:22.932526  258367 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 23:02:22.937866  258367 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0708 23:02:22.948342  258367 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 23:02:22.958723  258367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1885 bytes)
	I0708 23:02:22.968914  258367 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0708 23:02:22.971253  258367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 23:02:22.978510  258367 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783 for IP: 192.168.49.2
	I0708 23:02:22.978544  258367 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key
	I0708 23:02:23.190106  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt ...
	I0708 23:02:23.190133  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt: {Name:mk5906ee5301ffc572d7fce2bd29e40064ac492c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.190305  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key ...
	I0708 23:02:23.190322  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key: {Name:mkb3a034c656a399e8a3b1d9af8b8f2247a84d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.190411  258367 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key
	I0708 23:02:23.411680  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt ...
	I0708 23:02:23.411701  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt: {Name:mkfacfbb209518217be8fd06056f51a62e70f58a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.411817  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key ...
	I0708 23:02:23.411832  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key: {Name:mk60c06d0c1c23937aa87c9d7bc9822baf022041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.411935  258367 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.key
	I0708 23:02:23.411946  258367 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt with IP's: []
	I0708 23:02:23.821741  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt ...
	I0708 23:02:23.821759  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: {Name:mk73acb25b69f7dc2f7fe66039431368600627ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.821888  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.key ...
	I0708 23:02:23.821902  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.key: {Name:mk7826adaa22e4c26a84eba8050bc8619fdb79db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.821984  258367 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2
	I0708 23:02:23.821992  258367 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0708 23:02:24.106323  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2 ...
	I0708 23:02:24.106346  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2: {Name:mk3bbb246fdb644e03c711331594b91b252c5977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.106500  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2 ...
	I0708 23:02:24.106515  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2: {Name:mk07bce1d3519cbcd08d7913590fefe97615f3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.106597  258367 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt
	I0708 23:02:24.106652  258367 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key
	I0708 23:02:24.106701  258367 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key
	I0708 23:02:24.106710  258367 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt with IP's: []
	I0708 23:02:24.505496  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt ...
	I0708 23:02:24.505515  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt: {Name:mk1f4245b8de4cb8f5296cbf241b13df7d0321b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.505635  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key ...
	I0708 23:02:24.505649  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key: {Name:mk879c3b7560b49027151dcf6f41f1374ceeca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.505807  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem (1675 bytes)
	I0708 23:02:24.505843  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem (1078 bytes)
	I0708 23:02:24.505870  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem (1123 bytes)
	I0708 23:02:24.505903  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem (1679 bytes)
	I0708 23:02:24.506929  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0708 23:02:24.521556  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 23:02:24.535315  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 23:02:24.549240  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 23:02:24.566238  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 23:02:24.579920  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0708 23:02:24.593499  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 23:02:24.607144  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 23:02:24.621166  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 23:02:24.635126  258367 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 23:02:24.645377  258367 ssh_runner.go:149] Run: openssl version
	I0708 23:02:24.649491  258367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 23:02:24.655325  258367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:02:24.657850  258367 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jul  8 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:02:24.657899  258367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:02:24.661967  258367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
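	The link name b5213941.0 is not arbitrary: OpenSSL's verify path looks CAs up in /etc/ssl/certs by subject hash plus a .0 suffix, and the hash is exactly what the openssl x509 -hash call above printed:
	
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the symlink /etc/ssl/certs/b5213941.0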
	I0708 23:02:24.667605  258367 kubeadm.go:390] StartCluster: {Name:addons-20210708230204-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:02:24.667676  258367 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 23:02:24.667724  258367 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 23:02:24.689946  258367 cri.go:76] found id: ""
	I0708 23:02:24.689998  258367 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 23:02:24.695552  258367 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 23:02:24.700899  258367 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0708 23:02:24.700938  258367 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 23:02:24.706313  258367 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 23:02:24.706345  258367 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0708 23:02:51.639152  258367 out.go:192]   - Generating certificates and keys ...
	I0708 23:02:51.642275  258367 out.go:192]   - Booting up control plane ...
	I0708 23:02:51.645712  258367 out.go:192]   - Configuring RBAC rules ...
	I0708 23:02:51.648194  258367 cni.go:93] Creating CNI manager for ""
	I0708 23:02:51.648207  258367 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:02:51.650911  258367 out.go:165] * Configuring CNI (Container Networking Interface) ...
	I0708 23:02:51.650974  258367 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0708 23:02:51.654228  258367 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.2/kubectl ...
	I0708 23:02:51.654241  258367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0708 23:02:51.665340  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0708 23:02:52.178474  258367 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 23:02:52.178524  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:52.178573  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5 minikube.k8s.io/name=addons-20210708230204-257783 minikube.k8s.io/updated_at=2021_07_08T23_02_52_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:52.334017  258367 ops.go:34] apiserver oom_adj: -16
	I0708 23:02:52.334150  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:52.926162  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:53.425745  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:53.926612  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:54.426349  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:54.926224  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:55.426682  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:55.926123  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:56.426433  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:56.926151  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:57.425844  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:57.925720  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:58.425779  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:58.926601  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:59.426434  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:59.925788  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:00.425782  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:00.926167  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:01.425732  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:01.925980  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:02.426344  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:02.926043  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:03.425924  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:03.926241  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:04.425728  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:04.528289  258367 kubeadm.go:985] duration metric: took 12.349811507s to wait for elevateKubeSystemPrivileges.
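	The burst of identical kubectl get sa default runs above is minikube's elevateKubeSystemPrivileges wait: it polls every ~500ms until the default ServiceAccount exists, i.e. until the apiserver and controller-manager are actually serving. An equivalent loop (sketch, same paths as this log):
	
	until sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done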
	I0708 23:03:04.528311  258367 kubeadm.go:392] StartCluster complete in 39.860709437s
	I0708 23:03:04.528325  258367 settings.go:142] acquiring lock: {Name:mkd7e81a263e91a8570dc867d9c6f95db0e3f272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:03:04.528427  258367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:03:04.528864  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig: {Name:mk7ece99e42242db0c85d6c11531cc9d1c12a34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:03:05.058397  258367 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210708230204-257783" rescaled to 1
	I0708 23:03:05.058451  258367 start.go:220] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0708 23:03:05.061804  258367 out.go:165] * Verifying Kubernetes components...
	I0708 23:03:05.061859  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:03:05.058488  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 23:03:05.058693  258367 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0708 23:03:05.061990  258367 addons.go:59] Setting volumesnapshots=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.062007  258367 addons.go:135] Setting addon volumesnapshots=true in "addons-20210708230204-257783"
	I0708 23:03:05.062033  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.062522  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.062625  258367 addons.go:59] Setting ingress=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.062640  258367 addons.go:135] Setting addon ingress=true in "addons-20210708230204-257783"
	I0708 23:03:05.062668  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.063063  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.063653  258367 addons.go:59] Setting metrics-server=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.063680  258367 addons.go:135] Setting addon metrics-server=true in "addons-20210708230204-257783"
	I0708 23:03:05.063738  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.064183  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.064242  258367 addons.go:59] Setting olm=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.064258  258367 addons.go:135] Setting addon olm=true in "addons-20210708230204-257783"
	I0708 23:03:05.064273  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.064666  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.064716  258367 addons.go:59] Setting registry=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.064729  258367 addons.go:135] Setting addon registry=true in "addons-20210708230204-257783"
	I0708 23:03:05.064744  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.065116  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.065165  258367 addons.go:59] Setting storage-provisioner=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.065177  258367 addons.go:135] Setting addon storage-provisioner=true in "addons-20210708230204-257783"
	W0708 23:03:05.065182  258367 addons.go:147] addon storage-provisioner should already be in state true
	I0708 23:03:05.065199  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.065568  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.065625  258367 addons.go:59] Setting default-storageclass=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.065639  258367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210708230204-257783"
	I0708 23:03:05.065837  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.065881  258367 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.065904  258367 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210708230204-257783"
	I0708 23:03:05.065927  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.066288  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.066342  258367 addons.go:59] Setting gcp-auth=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.098501  258367 mustload.go:65] Loading cluster: addons-20210708230204-257783
	I0708 23:03:05.098919  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.371366  258367 out.go:165]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0708 23:03:05.378321  258367 out.go:165]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0708 23:03:05.380422  258367 out.go:165]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0708 23:03:05.380489  258367 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0708 23:03:05.380503  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0708 23:03:05.380556  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.377529  258367 addons.go:135] Setting addon default-storageclass=true in "addons-20210708230204-257783"
	W0708 23:03:05.381616  258367 addons.go:147] addon default-storageclass should already be in state true
	I0708 23:03:05.381653  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.382135  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.400974  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0708 23:03:05.401045  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0708 23:03:05.401062  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0708 23:03:05.401113  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.419618  258367 out.go:165]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0708 23:03:05.419690  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 23:03:05.419788  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0708 23:03:05.419842  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.461104  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.461570  258367 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0708 23:03:05.461617  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.465100  258367 out.go:165]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0708 23:03:05.468585  258367 out.go:165]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0708 23:03:05.487461  258367 node_ready.go:35] waiting up to 6m0s for node "addons-20210708230204-257783" to be "Ready" ...
	I0708 23:03:05.492140  258367 out.go:165]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 23:03:05.492222  258367 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:03:05.492230  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 23:03:05.492276  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.504847  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
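	The pipeline above splices a hosts block into the CoreDNS Corefile just before its forward stanza, so pods can resolve host.minikube.internal to the host gateway. The stanza the sed injects (indentation approximated):
	
	# hosts {
	#    192.168.49.1 host.minikube.internal
	#    fallthrough
	# }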
	I0708 23:03:05.515760  258367 out.go:165]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0708 23:03:05.519387  258367 out.go:165]   - Using image registry:2.7.1
	I0708 23:03:05.519516  258367 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0708 23:03:05.519538  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0708 23:03:05.519599  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.661979  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0708 23:03:05.661925  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.661958  258367 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0708 23:03:05.667742  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0708 23:03:05.667802  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.670800  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0708 23:03:05.676575  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0708 23:03:05.683758  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0708 23:03:05.688314  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0708 23:03:05.694203  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0708 23:03:05.701383  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0708 23:03:05.708144  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0708 23:03:05.714269  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0708 23:03:05.714329  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0708 23:03:05.714337  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0708 23:03:05.714395  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.847067  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.879578  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.938388  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.943788  258367 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 23:03:05.943837  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 23:03:05.943906  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.963212  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.963638  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.968031  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.033345  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.085378  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.088288  258367 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0708 23:03:06.088302  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
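The "scp memory" lines push addon manifests that exist only in memory straight onto the node, rather than staging temp files. A minimal sketch of that idea, assuming golang.org/x/crypto/ssh and an already-dialed *ssh.Client; pushBytes is a hypothetical helper, not minikube's actual code:

	package sshutil

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// pushBytes streams an in-memory manifest to a path on the node over a
	// single SSH session, writing through sudo tee so destinations under
	// /etc/kubernetes are writable.
	func pushBytes(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data) // the manifest arrives on stdin
		return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
	}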
	I0708 23:03:06.112546  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:03:06.168852  258367 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0708 23:03:06.184062  258367 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0708 23:03:06.184080  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0708 23:03:06.201809  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 23:03:06.201825  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0708 23:03:06.221468  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0708 23:03:06.221486  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0708 23:03:06.246330  258367 addons.go:135] Setting addon gcp-auth=true in "addons-20210708230204-257783"
	I0708 23:03:06.246370  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:06.246846  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
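The cli_runner lines use docker's Go-template --format flag to pull a single field out of `docker container inspect` instead of parsing its JSON. A small runnable sketch of the same call, shelling out exactly as the log does:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState runs the inspect command from the log above and
	// returns the container's state string (e.g. "running").
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("addons-20210708230204-257783")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println(state)
	}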
	I0708 23:03:06.288342  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0708 23:03:06.290388  258367 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0708 23:03:06.290404  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0708 23:03:06.297690  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0708 23:03:06.297703  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0708 23:03:06.300511  258367 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0708 23:03:06.300523  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0708 23:03:06.309464  258367 out.go:165]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0708 23:03:06.312084  258367 out.go:165]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0708 23:03:06.312130  258367 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0708 23:03:06.312141  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0708 23:03:06.312185  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:06.311243  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 23:03:06.312350  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0708 23:03:06.315934  258367 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0708 23:03:06.315947  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0708 23:03:06.333850  258367 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0708 23:03:06.333866  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0708 23:03:06.340033  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0708 23:03:06.347208  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 23:03:06.368086  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.373362  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 23:03:06.373381  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0708 23:03:06.407298  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0708 23:03:06.407832  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0708 23:03:06.407846  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0708 23:03:06.433603  258367 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0708 23:03:06.433620  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0708 23:03:06.459997  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 23:03:06.521614  258367 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0708 23:03:06.521634  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0708 23:03:06.526576  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0708 23:03:06.526592  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0708 23:03:06.630213  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0708 23:03:06.630233  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0708 23:03:06.665127  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0708 23:03:06.665146  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0708 23:03:06.730299  258367 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0708 23:03:06.730316  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0708 23:03:06.757125  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0708 23:03:06.757143  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0708 23:03:06.802645  258367 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 23:03:06.802661  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0708 23:03:06.832185  258367 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0708 23:03:06.832201  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0708 23:03:06.877465  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0708 23:03:06.877483  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0708 23:03:06.959711  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0708 23:03:06.962339  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 23:03:07.007218  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0708 23:03:07.007279  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0708 23:03:07.096016  258367 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.591115427s)
	I0708 23:03:07.096079  258367 start.go:730] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0708 23:03:07.129552  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0708 23:03:07.129605  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0708 23:03:07.221658  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0708 23:03:07.221716  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0708 23:03:07.325389  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0708 23:03:07.325435  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0708 23:03:07.426703  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0708 23:03:07.426761  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0708 23:03:07.442363  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.329793015s)
	I0708 23:03:07.524298  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0708 23:03:07.757844  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
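The node_ready.go lines repeat until the node's Ready condition flips to True. A minimal sketch of that check using client-go, assuming a configured clientset; this is illustrative, not minikube's actual implementation:

	package kverify

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether the named node's Ready condition is True,
	// the same status the log above prints as "Ready":"False".
	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}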
	I0708 23:03:08.324926  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.036545542s)
	I0708 23:03:08.324953  258367 addons.go:313] Verifying addon registry=true in "addons-20210708230204-257783"
	I0708 23:03:08.327648  258367 out.go:165] * Verifying registry addon...
	I0708 23:03:08.329284  258367 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0708 23:03:08.430886  258367 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0708 23:03:08.430908  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
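The kapi.go lines that follow are a label-selector poll: list pods matching the selector, report progress, and loop while they stay Pending. A sketch of that wait loop under the same assumptions (client-go, a configured clientset); not minikube's actual kapi code:

	package kapi

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodRunning polls pods matching selector in ns until one
	// reaches phase Running or the timeout expires.
	func waitForPodRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // still Pending, as in the lines above
		})
	}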
	I0708 23:03:08.965833  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:09.475295  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:09.989922  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:10.040204  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:10.520729  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:10.760315  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.413074224s)
	I0708 23:03:10.760389  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (4.42033799s)
	I0708 23:03:10.760409  258367 addons.go:313] Verifying addon ingress=true in "addons-20210708230204-257783"
	I0708 23:03:10.763179  258367 out.go:165] * Verifying ingress addon...
	I0708 23:03:10.764874  258367 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0708 23:03:10.813355  258367 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0708 23:03:10.813398  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:10.954984  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:11.316902  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:11.437680  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:11.816977  258367 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0708 23:03:11.816992  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:11.933802  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:12.317784  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:12.440866  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:12.562931  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:12.826050  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:12.960758  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:12.989544  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (6.582214242s)
	W0708 23:03:12.989579  258367 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0708 23:03:12.989597  258367 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
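The failure above is a CRD registration race: crds.yaml creates the OperatorGroup, ClusterServiceVersion, and CatalogSource definitions in the same run that olm.yaml instantiates them, and until the API server registers the new types, kubectl apply fails with "no matches for kind". The retry.go line shows minikube's response: re-run the apply after a backoff delay. A generic sketch of that pattern; applyWithRetry is a hypothetical helper, not minikube's retry.go:

	package retryutil

	import "time"

	// applyWithRetry re-runs fn until it succeeds or attempts run out,
	// sleeping with doubling delays between tries; the failure clears
	// once the API server has registered the freshly created CRDs.
	func applyWithRetry(fn func() error, attempts int, initial time.Duration) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}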
	I0708 23:03:12.989682  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.529663613s)
	I0708 23:03:12.989696  258367 addons.go:313] Verifying addon metrics-server=true in "addons-20210708230204-257783"
	I0708 23:03:12.989760  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (6.029992828s)
	I0708 23:03:12.989771  258367 addons.go:313] Verifying addon gcp-auth=true in "addons-20210708230204-257783"
	I0708 23:03:12.992443  258367 out.go:165] * Verifying gcp-auth addon...
	I0708 23:03:12.994031  258367 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0708 23:03:12.990200  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.027809025s)
	W0708 23:03:12.994192  258367 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0708 23:03:12.994206  258367 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
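The VolumeSnapshotClass failure here is the same CRD race in a second addon bundle. An alternative to blind retry is to wait for each CRD's Established condition before applying custom resources; a sketch using the apiextensions clientset, offered only as a contrast to the retry approach minikube actually takes above:

	package crdwait

	import (
		"context"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// crdEstablished reports whether the named CRD's Established
	// condition is True; polling this before applying CRs avoids the
	// "no matches for kind" errors shown above.
	func crdEstablished(ctx context.Context, cs clientset.Interface, name string) (bool, error) {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range crd.Status.Conditions {
			if c.Type == apiextv1.Established {
				return c.Status == apiextv1.ConditionTrue, nil
			}
		}
		return false, nil
	}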
	I0708 23:03:13.031412  258367 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0708 23:03:13.031426  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:13.266238  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0708 23:03:13.280326  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.755950239s)
	I0708 23:03:13.280349  258367 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210708230204-257783"
	I0708 23:03:13.282866  258367 out.go:165] * Verifying csi-hostpath-driver addon...
	I0708 23:03:13.284710  258367 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0708 23:03:13.300997  258367 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0708 23:03:13.301018  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:13.323810  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:13.355033  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 23:03:13.456204  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:13.709988  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:13.828512  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:13.829141  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:13.935230  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:14.055589  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:14.305521  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:14.316503  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:14.435410  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:14.537594  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:14.807762  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:14.828786  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:14.871664  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.516602534s)
	I0708 23:03:14.871693  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (1.605427862s)
	I0708 23:03:14.934512  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:15.028279  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:15.033924  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:15.306879  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:15.316769  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:15.435470  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:15.541639  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:15.807294  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:15.816837  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:15.933956  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:16.034246  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:16.309311  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:16.321752  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:16.434021  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:16.533676  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:16.805508  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:16.815731  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:16.933818  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:17.033326  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:17.305073  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:17.316483  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:17.434168  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:17.528224  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:17.533933  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:17.804563  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:17.815690  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:17.934352  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:18.033474  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:18.306836  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:18.315822  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:18.433636  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:18.534190  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:18.878422  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:18.879963  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:18.934171  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:19.033713  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:19.306039  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:19.316562  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:19.433898  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:19.533217  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:19.804900  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:19.815673  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:19.934357  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:20.028418  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:20.033110  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:20.307396  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:20.315940  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:20.434357  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:20.533275  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:20.805517  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:20.815912  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:20.934171  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:21.033363  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:21.305599  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:21.315940  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:21.434182  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:21.533779  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:21.806276  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:21.816677  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:21.934247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:22.028451  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:22.033427  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:22.306106  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:22.316600  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:22.434222  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:22.534180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:22.805282  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:22.816023  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:22.934458  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:23.034254  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:23.305403  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:23.315955  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:23.434280  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:23.533816  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:23.805762  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:23.816193  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:23.933917  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:24.033525  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:24.305586  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:24.316060  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:24.434308  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:24.528203  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:24.533860  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:24.804747  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:24.816238  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:24.934405  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:25.033542  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:25.305381  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:25.315630  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:25.434115  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:25.533435  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:25.805258  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:25.815583  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:25.934247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:26.033468  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:26.313468  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:26.316207  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:26.434177  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:26.533378  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:26.805481  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:26.815971  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:26.933457  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:27.028244  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:27.033944  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:27.305978  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:27.316514  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:27.433572  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:27.533438  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:27.808338  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:27.815850  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:27.934256  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:28.033462  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:28.305603  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:28.316131  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:28.434164  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:28.533546  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:28.805406  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:28.815807  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:29.027114  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:29.028529  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:29.033307  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:29.305177  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:29.316667  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:29.434163  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:29.533817  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:29.804531  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:29.815964  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:29.934253  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:30.034141  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:30.304975  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:30.316394  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:30.434247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:30.533988  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:30.804599  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:30.815861  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:30.934212  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:31.033880  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:31.305080  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:31.316591  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:31.438379  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:31.528481  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:31.534085  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:31.805329  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:31.815830  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:31.934243  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:32.033930  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:32.305955  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:32.316622  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:32.433806  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:32.534171  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:32.805098  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:32.816697  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:32.934425  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:33.033830  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:33.305586  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:33.316129  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:33.434296  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:33.533867  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:33.805664  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:33.816014  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:33.934202  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:34.027838  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:34.033689  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:34.312483  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:34.325172  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:34.433975  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:34.533845  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:34.805613  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:34.816177  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:34.934061  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:35.033908  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:35.304911  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:35.316503  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:35.433830  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:35.533555  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:35.805708  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:35.816011  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:35.934894  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:36.033706  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:36.305768  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:36.316210  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:36.433880  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:36.527982  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:36.533718  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:36.806281  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:36.816802  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:36.934001  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:37.033899  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:37.305715  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:37.316438  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:37.434076  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:37.534167  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:37.805363  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:37.815663  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:37.934569  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:38.033576  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:38.305386  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:38.315919  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:38.434305  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:38.528076  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:38.533725  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:38.805854  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:38.816199  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:38.933965  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:39.161933  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:39.305111  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:39.316710  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:39.433757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:39.533280  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:39.805180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:39.815640  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:39.933837  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:40.033359  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:40.305820  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:40.316500  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:40.434764  258367 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0708 23:03:40.434780  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:40.529064  258367 node_ready.go:49] node "addons-20210708230204-257783" has status "Ready":"True"
	I0708 23:03:40.529082  258367 node_ready.go:38] duration metric: took 35.041601595s waiting for node "addons-20210708230204-257783" to be "Ready" ...
	I0708 23:03:40.529090  258367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
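The node wait resolves at this point: node_ready.go has been polling the node object until its Ready condition reported True, which took 35.04s in this run. A minimal client-go sketch of that polling pattern follows; it assumes a clientset, and the helper name waitNodeReady and the 500ms interval are illustrative, not minikube's actual node_ready.go code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls a node until its Ready condition is True.
    // Hypothetical helper, not minikube's implementation.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat API hiccups as transient and keep polling
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }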
	I0708 23:03:40.536245  258367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:40.538782  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:40.805427  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:40.815950  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:40.935684  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:41.033391  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:41.305384  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:41.315751  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:41.434230  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:41.534294  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:41.805062  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:41.816413  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:41.934437  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:42.033426  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:42.305936  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:42.316832  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:42.452020  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:42.537291  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:42.556855  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
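The status dump above shows why coredns is stuck in Pending: its PodScheduled condition is False with reason Unschedulable, because the single node still carries the node.kubernetes.io/not-ready taint, which the pod does not tolerate. A short sketch of reading that condition off a pod object; the helper name explainPending is hypothetical:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // explainPending reports why a Pending pod has not been scheduled,
    // mirroring the pod_ready.go dump above. Hypothetical helper.
    func explainPending(pod *corev1.Pod) string {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodScheduled && c.Status == corev1.ConditionFalse {
    			// Reason "Unschedulable" plus the taint message mean the scheduler
    			// is waiting for node.kubernetes.io/not-ready to be removed.
    			return fmt.Sprintf("pod %q unschedulable: %s", pod.Name, c.Message)
    		}
    	}
    	return ""
    }

Once the node goes Ready at 23:03:40, the kubelet's not-ready taint is lifted and the condition flips to PodScheduled:True (visible from 23:03:50 below).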
	I0708 23:03:42.808971  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:42.837617  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:42.948502  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:43.042597  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:43.306461  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:43.316194  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:43.435340  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:43.545175  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:43.808213  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:43.816653  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:43.939445  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:44.036294  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:44.310153  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:44.316861  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:44.452869  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:44.534309  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:44.858439  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:44.859060  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:44.980373  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:45.033572  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:45.055378  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:45.307109  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:45.317405  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:45.436040  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:45.535852  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:45.813865  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:45.822168  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:45.938855  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:46.034757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:46.307667  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:46.318795  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:46.434514  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:46.534105  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:46.808438  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:46.816223  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:46.934772  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:47.047317  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:47.055451  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:47.309428  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:47.326146  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:47.435940  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:47.533825  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:47.818764  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:47.819392  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:47.939958  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:48.035140  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:48.310806  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:48.322692  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:48.435268  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:48.534774  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:48.805303  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:48.816955  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:48.934285  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:49.034137  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:49.305083  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:49.317534  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:49.434132  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:49.534176  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:49.553147  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:49.806795  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:49.816930  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:49.935353  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:50.044577  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:50.308970  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:50.317116  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:50.453642  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:50.537013  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:50.809687  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:50.816605  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:50.934956  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:51.034111  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:51.306357  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:51.316788  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:51.434223  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:51.533862  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:51.555405  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:51.806049  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:51.816665  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:51.935239  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:52.036764  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:52.306561  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:52.317258  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:52.435120  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:52.534680  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:52.812738  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:52.823062  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:52.935048  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:53.088180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:53.313593  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:53.316210  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:53.435174  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:53.534856  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:53.568302  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:53.814037  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:53.828699  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:53.935094  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:54.034051  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:54.305943  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:54.316383  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:54.434884  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:54.533560  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:54.805349  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:54.816025  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:54.935440  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:55.035068  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:55.305204  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:55.316789  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:55.434307  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:55.534194  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:55.805524  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:55.816474  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:55.938616  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:56.035071  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:56.054514  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:56.306243  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:56.317135  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:56.437740  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:56.534893  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:56.808396  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:56.818595  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:56.935008  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:57.033737  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:57.305195  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:57.316759  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:57.434680  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:57.534533  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:57.805918  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:57.816886  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:57.934412  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:58.034045  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:58.053423  258367 pod_ready.go:92] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.053448  258367 pod_ready.go:81] duration metric: took 17.517183186s waiting for pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.053472  258367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.056899  258367 pod_ready.go:92] pod "etcd-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.056912  258367 pod_ready.go:81] duration metric: took 3.428532ms waiting for pod "etcd-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.056924  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.060405  258367 pod_ready.go:92] pod "kube-apiserver-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.060421  258367 pod_ready.go:81] duration metric: took 3.48906ms waiting for pod "kube-apiserver-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.060430  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.063897  258367 pod_ready.go:92] pod "kube-controller-manager-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.063911  258367 pod_ready.go:81] duration metric: took 3.473676ms waiting for pod "kube-controller-manager-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.063920  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6dvf4" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.067194  258367 pod_ready.go:92] pod "kube-proxy-6dvf4" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.067211  258367 pod_ready.go:81] duration metric: took 3.28452ms waiting for pod "kube-proxy-6dvf4" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.067219  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.305241  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:58.316828  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:58.433981  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:58.452430  258367 pod_ready.go:92] pod "kube-scheduler-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.452441  258367 pod_ready.go:81] duration metric: took 385.215878ms waiting for pod "kube-scheduler-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.452450  258367 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace to be "Ready" ...
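Each system-critical pod above is gated the same way: pod_ready.go polls until the pod's Ready condition is True, then records a duration metric. A minimal sketch under the same client-go assumptions; waitPodReady is an illustrative name and the interval is a guess:

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady blocks until the pod's Ready condition is True and returns
    // how long the wait took, echoing the "duration metric" lines above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) (time.Duration, error) {
    	start := time.Now()
    	err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	return time.Since(start), err
    }

The control-plane pods return almost immediately (3-4ms each) because they were already Ready; the metrics-server wait that starts here is the one that keeps polling below.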
	I0708 23:03:58.534269  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:58.805326  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:58.817091  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:58.934377  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:59.034175  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:59.350171  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:59.352070  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:59.434599  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:59.534222  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:59.805415  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:59.815966  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:59.934956  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:00.034180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:00.309488  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:00.318040  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:00.446679  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:00.535081  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:00.816881  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:00.833611  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:00.870726  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:00.940553  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:01.034681  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:01.307110  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:01.317003  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:01.434206  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:01.534362  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:01.807693  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:01.816875  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:01.934543  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:02.034256  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:02.314645  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:02.317830  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:02.434638  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:02.534555  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:02.805757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:02.816388  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:02.934273  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:03.034346  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:03.306135  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:03.316729  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:03.356699  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:03.434801  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:03.533628  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:03.806090  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:03.817066  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:03.935092  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:04.033904  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:04.308003  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:04.321774  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:04.437942  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:04.537834  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:04.811895  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:04.820598  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:04.941910  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:05.033681  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:05.306843  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:05.316309  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:05.434665  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:05.534399  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:05.808209  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:05.828690  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:05.867140  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:05.948427  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:06.034144  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:06.310817  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:06.317048  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:06.438453  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:06.538009  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:06.814561  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:06.818503  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:06.934667  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:07.034023  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:07.324320  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:07.329971  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:07.435453  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:07.534858  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:07.806398  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:07.816291  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:07.936272  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:08.045241  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:08.321615  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:08.322966  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:08.361134  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:08.444857  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:08.539896  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:08.810573  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:08.824748  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:08.950445  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:09.038119  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:09.309840  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:09.319094  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:09.451790  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:09.534821  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:09.806319  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:09.824601  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:09.948537  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:10.038714  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:10.306101  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:10.316712  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:10.444324  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:10.534107  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:10.816348  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:10.821169  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:10.880798  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:10.938689  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:11.035004  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:11.309429  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:11.316532  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:11.437001  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:11.534517  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:11.806765  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:11.817111  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:11.936678  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:12.035677  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:12.315185  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:12.319448  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:12.435665  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:12.533892  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:12.806413  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:12.816140  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:12.935226  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:13.033973  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:13.308652  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:13.318458  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:13.359912  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:13.434988  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:13.535000  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:13.807981  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:13.817954  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:13.939485  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:14.035287  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:14.310831  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:14.317343  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:14.434300  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:14.533986  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:14.806121  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:14.817043  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:14.934166  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:15.034426  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:15.323217  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:15.333228  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:15.361384  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:15.435068  258367 kapi.go:108] duration metric: took 1m7.105782912s to wait for kubernetes.io/minikube-addons=registry ...
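The registry selector converges here after 1m7.1s. The kapi.go:96 lines that dominate this log come from a selector-based wait of roughly this shape: list the pods matching a label, log any pod that is not yet Running, and retry on an interval until all match or the timeout expires. A hedged sketch; waitForLabel and the interval are assumptions, not minikube's actual kapi.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForLabel polls all pods matching selector until every one is Running,
    // logging the first pending pod's phase on each pass, in the spirit of the
    // kapi.go:96 lines above.
    func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil || len(pods.Items) == 0 {
    			return false, nil
    		}
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    				return false, nil
    			}
    		}
    		return true, nil
    	})
    }

The gcp-auth, csi-hostpath-driver, and ingress-nginx selectors are still pending at this point and keep cycling below.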
	I0708 23:04:15.537812  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:15.813281  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:15.819581  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:16.049106  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:16.307540  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:16.316183  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:16.534458  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:16.806532  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:16.818477  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:17.033945  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:17.305444  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:17.315952  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:17.391245  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:17.536664  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:17.807013  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:17.817134  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:18.033576  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:18.307317  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:18.316474  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:18.534860  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:18.806288  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:18.821696  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:19.034372  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:19.312150  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:19.320785  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:19.539562  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:19.809151  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:19.819730  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:19.859154  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:20.034514  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:20.308034  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:20.318688  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:20.537429  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:20.828970  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:20.835630  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:21.037624  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:21.308202  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:21.317118  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:21.536565  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:21.835761  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:21.836860  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:21.860825  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:22.053067  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:22.322252  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:22.328107  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:22.537575  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:22.807319  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:22.817028  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:23.034601  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:23.306551  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:23.317288  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:23.534889  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:23.806323  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:23.817210  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:24.034124  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:24.305483  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:24.316211  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:24.355999  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:24.535624  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:24.807872  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:24.818209  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:25.037306  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:25.311421  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:25.322366  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:25.533957  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:25.805852  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:25.816446  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:26.034421  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:26.306096  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:26.316792  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:26.356696  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:26.534030  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:26.806374  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:26.817396  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:27.034314  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:27.341253  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:27.348955  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:27.544750  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:27.816243  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:27.830215  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:28.037855  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:28.314775  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:28.332322  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:28.374180  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:28.545092  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:28.806885  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:28.817006  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:29.049276  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:29.307196  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:29.317899  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:29.714913  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:29.813421  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:29.822469  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:30.034706  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:30.307815  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:30.317761  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:30.376585  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:30.536075  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:30.819025  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:30.820506  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:30.861001  258367 pod_ready.go:92] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"True"
	I0708 23:04:30.861017  258367 pod_ready.go:81] duration metric: took 32.408556096s waiting for pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace to be "Ready" ...
	I0708 23:04:30.861035  258367 pod_ready.go:38] duration metric: took 50.331926706s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
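
The pod_ready polling above re-checks the pod's Ready condition until it flips to "True" (here at 23:04:30, after 32.4s). A minimal client-go sketch of that pattern, assuming a kubeconfig at the default path; waitPodReady and the 2-second interval are illustrative, not minikube's actual pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the API server until the pod reports Ready,
    // mirroring the "Ready":"False" -> "Ready":"True" lines above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(2 * time.Second): // roughly the cadence seen in the log
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(ctx, cs, "kube-system", "metrics-server-77c99ccb96-g7fdg"))
    }
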
	I0708 23:04:30.861054  258367 api_server.go:50] waiting for apiserver process to appear ...
	I0708 23:04:30.861071  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 23:04:30.861149  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 23:04:31.039139  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:31.120445  258367 cri.go:76] found id: "31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:31.120490  258367 cri.go:76] found id: ""
	I0708 23:04:31.120510  258367 logs.go:270] 1 containers: [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968]
	I0708 23:04:31.120582  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.126428  258367 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 23:04:31.126503  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 23:04:31.157908  258367 cri.go:76] found id: "22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:31.157946  258367 cri.go:76] found id: ""
	I0708 23:04:31.157962  258367 logs.go:270] 1 containers: [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba]
	I0708 23:04:31.158024  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.160719  258367 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 23:04:31.160800  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 23:04:31.195992  258367 cri.go:76] found id: "b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:31.196010  258367 cri.go:76] found id: ""
	I0708 23:04:31.196015  258367 logs.go:270] 1 containers: [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891]
	I0708 23:04:31.196063  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.198761  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 23:04:31.198825  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 23:04:31.239007  258367 cri.go:76] found id: "44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:31.239025  258367 cri.go:76] found id: ""
	I0708 23:04:31.239030  258367 logs.go:270] 1 containers: [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab]
	I0708 23:04:31.239073  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.241734  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 23:04:31.241798  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 23:04:31.272832  258367 cri.go:76] found id: "49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:31.272850  258367 cri.go:76] found id: ""
	I0708 23:04:31.272856  258367 logs.go:270] 1 containers: [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27]
	I0708 23:04:31.272900  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.275666  258367 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 23:04:31.275734  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 23:04:31.301615  258367 cri.go:76] found id: ""
	I0708 23:04:31.301628  258367 logs.go:270] 0 containers: []
	W0708 23:04:31.301634  258367 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0708 23:04:31.301641  258367 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 23:04:31.301678  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 23:04:31.311401  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:31.322172  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:31.346810  258367 cri.go:76] found id: "bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:31.346834  258367 cri.go:76] found id: ""
	I0708 23:04:31.346840  258367 logs.go:270] 1 containers: [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc]
	I0708 23:04:31.346879  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.349712  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 23:04:31.349757  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 23:04:31.373832  258367 cri.go:76] found id: "8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:31.373850  258367 cri.go:76] found id: ""
	I0708 23:04:31.373856  258367 logs.go:270] 1 containers: [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194]
	I0708 23:04:31.373899  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.376711  258367 logs.go:123] Gathering logs for storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] ...
	I0708 23:04:31.376736  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:31.414919  258367 logs.go:123] Gathering logs for kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] ...
	I0708 23:04:31.414940  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:31.472616  258367 logs.go:123] Gathering logs for CRI-O ...
	I0708 23:04:31.472645  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 23:04:31.534862  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:31.614324  258367 logs.go:123] Gathering logs for kubelet ...
	I0708 23:04:31.614347  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 23:04:31.696068  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:31.697579  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:31.733579  258367 logs.go:123] Gathering logs for kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] ...
	I0708 23:04:31.733602  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:31.808651  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:31.826645  258367 logs.go:123] Gathering logs for coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] ...
	I0708 23:04:31.826667  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:31.830534  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:31.860747  258367 logs.go:123] Gathering logs for kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] ...
	I0708 23:04:31.860772  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:31.896690  258367 logs.go:123] Gathering logs for kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] ...
	I0708 23:04:31.896730  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:31.927741  258367 logs.go:123] Gathering logs for container status ...
	I0708 23:04:31.927787  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 23:04:31.997716  258367 logs.go:123] Gathering logs for dmesg ...
	I0708 23:04:31.997741  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 23:04:32.056732  258367 logs.go:123] Gathering logs for describe nodes ...
	I0708 23:04:32.056755  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 23:04:32.079686  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:32.360060  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:32.372384  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:32.406945  258367 logs.go:123] Gathering logs for etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] ...
	I0708 23:04:32.406966  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:32.447186  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:32.447204  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	W0708 23:04:32.447320  258367 out.go:230] X Problems detected in kubelet:
	W0708 23:04:32.447329  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:32.447336  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:32.447342  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:32.447346  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
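
Each gathering pass above is the same two shell steps per component: enumerate container ids with "sudo crictl ps -a --quiet --name=<component>", then tail each id with "sudo /usr/bin/crictl logs --tail 400 <id>". A sketch of that loop using the exact commands from the log; the helper name ids is made up:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // ids mirrors the cri.go enumeration above; --quiet prints one container
    // id per line, or nothing when no container matches (the "0 containers"
    // case for kubernetes-dashboard).
    func ids(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"}
        for _, c := range components {
            found, err := ids(c)
            if err != nil {
                fmt.Println(c, err)
                continue
            }
            for _, id := range found {
                // same tail command the ssh_runner lines execute over SSH
                logs, _ := exec.Command("/bin/bash", "-c",
                    "sudo /usr/bin/crictl logs --tail 400 "+id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }
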
	I0708 23:04:32.535609  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:32.871932  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:32.875314  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:33.035264  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:33.322729  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:33.325758  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:33.539049  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:33.809122  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:33.840811  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:34.038231  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:34.305821  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:34.316472  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:34.536871  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:34.806324  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:34.817609  258367 kapi.go:108] duration metric: took 1m24.052734457s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0708 23:04:35.034113  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:35.305618  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:35.534039  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:35.805464  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:36.033757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:36.305563  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:36.535022  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:36.810065  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:37.044021  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:37.306365  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:37.534507  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:37.806247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:38.034711  258367 kapi.go:108] duration metric: took 1m25.040677396s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0708 23:04:38.037692  258367 out.go:165] * Your GCP credentials will now be mounted into every pod created in the addons-20210708230204-257783 cluster.
	I0708 23:04:38.039905  258367 out.go:165] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0708 23:04:38.043013  258367 out.go:165] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
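
The kapi.go lines that dominate this section wait on a label selector rather than a single pod name: list every pod matching, say, kubernetes.io/minikube-addons=csi-hostpath-driver, and keep polling while any match is still Pending. A hedged client-go sketch of that check; allRunning and the 500ms interval are assumptions, not minikube's kapi.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allRunning reports whether every pod matching the selector has left
    // the Pending phase ("current state: Pending" in the log above).
    func allRunning(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        if len(pods.Items) == 0 {
            return false, nil // nothing scheduled yet
        }
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodPending {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()
        for {
            ok, err := allRunning(ctx, cs, "kubernetes.io/minikube-addons=csi-hostpath-driver")
            if err != nil {
                panic(err)
            }
            if ok {
                break
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("all matching pods past Pending")
    }
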
	I0708 23:04:38.305726  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:38.805442  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:39.313290  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:39.806014  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:40.305951  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:40.805935  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:41.306167  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:41.807569  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:42.306223  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:42.448401  258367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:04:42.471087  258367 api_server.go:70] duration metric: took 1m37.412609728s to wait for apiserver process to appear ...
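
The process probe that just completed is a single pgrep over the full command line. A sketch with the flag string copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe as the ssh_runner line above:
        //   sudo pgrep -xnf kube-apiserver.*minikube.*
        // -f matches against the full command line, -x requires the pattern
        // to match exactly, and -n keeps only the newest matching process.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver process not found:", err)
            return
        }
        fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    }
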
	I0708 23:04:42.471136  258367 api_server.go:86] waiting for apiserver healthz status ...
	I0708 23:04:42.471163  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 23:04:42.471209  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 23:04:42.498226  258367 cri.go:76] found id: "31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:42.498240  258367 cri.go:76] found id: ""
	I0708 23:04:42.498245  258367 logs.go:270] 1 containers: [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968]
	I0708 23:04:42.498287  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.500629  258367 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 23:04:42.500670  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 23:04:42.522032  258367 cri.go:76] found id: "22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:42.522070  258367 cri.go:76] found id: ""
	I0708 23:04:42.522089  258367 logs.go:270] 1 containers: [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba]
	I0708 23:04:42.522123  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.524515  258367 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 23:04:42.524555  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 23:04:42.544404  258367 cri.go:76] found id: "b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:42.544416  258367 cri.go:76] found id: ""
	I0708 23:04:42.544421  258367 logs.go:270] 1 containers: [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891]
	I0708 23:04:42.544452  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.546783  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 23:04:42.546847  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 23:04:42.566390  258367 cri.go:76] found id: "44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:42.566406  258367 cri.go:76] found id: ""
	I0708 23:04:42.566410  258367 logs.go:270] 1 containers: [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab]
	I0708 23:04:42.566444  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.568814  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 23:04:42.568853  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 23:04:42.589259  258367 cri.go:76] found id: "49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:42.589295  258367 cri.go:76] found id: ""
	I0708 23:04:42.589306  258367 logs.go:270] 1 containers: [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27]
	I0708 23:04:42.589338  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.591563  258367 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 23:04:42.591603  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 23:04:42.614367  258367 cri.go:76] found id: ""
	I0708 23:04:42.614381  258367 logs.go:270] 0 containers: []
	W0708 23:04:42.614386  258367 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0708 23:04:42.614393  258367 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 23:04:42.614447  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 23:04:42.635565  258367 cri.go:76] found id: "bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:42.635602  258367 cri.go:76] found id: ""
	I0708 23:04:42.635617  258367 logs.go:270] 1 containers: [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc]
	I0708 23:04:42.635661  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.638113  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 23:04:42.638155  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 23:04:42.658400  258367 cri.go:76] found id: "8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:42.658416  258367 cri.go:76] found id: ""
	I0708 23:04:42.658420  258367 logs.go:270] 1 containers: [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194]
	I0708 23:04:42.658462  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.660879  258367 logs.go:123] Gathering logs for describe nodes ...
	I0708 23:04:42.660896  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 23:04:42.804504  258367 logs.go:123] Gathering logs for kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] ...
	I0708 23:04:42.804554  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:42.813337  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:42.865302  258367 logs.go:123] Gathering logs for etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] ...
	I0708 23:04:42.865326  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:42.890504  258367 logs.go:123] Gathering logs for coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] ...
	I0708 23:04:42.890524  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:42.910953  258367 logs.go:123] Gathering logs for storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] ...
	I0708 23:04:42.910972  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:42.931942  258367 logs.go:123] Gathering logs for container status ...
	I0708 23:04:42.931963  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 23:04:42.960735  258367 logs.go:123] Gathering logs for dmesg ...
	I0708 23:04:42.960775  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 23:04:43.006287  258367 logs.go:123] Gathering logs for kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] ...
	I0708 23:04:43.006333  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:43.045345  258367 logs.go:123] Gathering logs for kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] ...
	I0708 23:04:43.045367  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:43.069858  258367 logs.go:123] Gathering logs for kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] ...
	I0708 23:04:43.069878  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:43.111980  258367 logs.go:123] Gathering logs for CRI-O ...
	I0708 23:04:43.112002  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 23:04:43.206087  258367 logs.go:123] Gathering logs for kubelet ...
	I0708 23:04:43.206109  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 23:04:43.295091  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:43.296592  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:43.309878  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:43.337457  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:43.337472  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	W0708 23:04:43.337580  258367 out.go:230] X Problems detected in kubelet:
	W0708 23:04:43.337591  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:43.337599  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:43.337608  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:43.337613  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:04:43.806280  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:44.307515  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:44.805825  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:45.305526  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:45.805382  258367 kapi.go:108] duration metric: took 1m32.520669956s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0708 23:04:45.807522  258367 out.go:165] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, volumesnapshots, olm, registry, ingress, gcp-auth, csi-hostpath-driver
	I0708 23:04:45.807541  258367 addons.go:344] enableAddons completed in 1m40.748851438s
	I0708 23:04:53.338820  258367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0708 23:04:53.347181  258367 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0708 23:04:53.348070  258367 api_server.go:139] control plane version: v1.21.2
	I0708 23:04:53.348089  258367 api_server.go:129] duration metric: took 10.876941605s to wait for apiserver health ...
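
The healthz wait is a plain HTTPS GET against the apiserver; a 200 with body "ok" (as logged at 23:04:53) counts as healthy. A standalone sketch; skipping TLS verification is an assumption made here for brevity, since minikube itself trusts the cluster CA from its kubeconfig:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver serves /healthz over TLS with a cluster-local CA,
        // so this standalone probe skips verification (assumption).
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // Expect "200 ok", exactly what api_server.go records above.
        fmt.Println(resp.StatusCode, string(body))
    }
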
	I0708 23:04:53.348098  258367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 23:04:53.348115  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 23:04:53.348166  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 23:04:53.375754  258367 cri.go:76] found id: "31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:53.375767  258367 cri.go:76] found id: ""
	I0708 23:04:53.375772  258367 logs.go:270] 1 containers: [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968]
	I0708 23:04:53.375811  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.378392  258367 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 23:04:53.378435  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 23:04:53.398815  258367 cri.go:76] found id: "22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:53.398829  258367 cri.go:76] found id: ""
	I0708 23:04:53.398833  258367 logs.go:270] 1 containers: [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba]
	I0708 23:04:53.398865  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.401349  258367 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 23:04:53.401392  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 23:04:53.421390  258367 cri.go:76] found id: "b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:53.421404  258367 cri.go:76] found id: ""
	I0708 23:04:53.421409  258367 logs.go:270] 1 containers: [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891]
	I0708 23:04:53.421442  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.423799  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 23:04:53.423844  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 23:04:53.443510  258367 cri.go:76] found id: "44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:53.443526  258367 cri.go:76] found id: ""
	I0708 23:04:53.443531  258367 logs.go:270] 1 containers: [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab]
	I0708 23:04:53.443560  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.445900  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 23:04:53.445940  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 23:04:53.466255  258367 cri.go:76] found id: "49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:53.466268  258367 cri.go:76] found id: ""
	I0708 23:04:53.466273  258367 logs.go:270] 1 containers: [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27]
	I0708 23:04:53.466303  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.468712  258367 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 23:04:53.468766  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 23:04:53.488311  258367 cri.go:76] found id: ""
	I0708 23:04:53.488323  258367 logs.go:270] 0 containers: []
	W0708 23:04:53.488328  258367 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0708 23:04:53.488342  258367 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 23:04:53.488393  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 23:04:53.508339  258367 cri.go:76] found id: "bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:53.508353  258367 cri.go:76] found id: ""
	I0708 23:04:53.508357  258367 logs.go:270] 1 containers: [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc]
	I0708 23:04:53.508388  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.510777  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 23:04:53.510819  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 23:04:53.530634  258367 cri.go:76] found id: "8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:53.530668  258367 cri.go:76] found id: ""
	I0708 23:04:53.530682  258367 logs.go:270] 1 containers: [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194]
	I0708 23:04:53.530721  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.533156  258367 logs.go:123] Gathering logs for etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] ...
	I0708 23:04:53.533169  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:53.558018  258367 logs.go:123] Gathering logs for kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] ...
	I0708 23:04:53.558035  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:53.581895  258367 logs.go:123] Gathering logs for kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] ...
	I0708 23:04:53.581912  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:53.633038  258367 logs.go:123] Gathering logs for coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] ...
	I0708 23:04:53.633079  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:53.661558  258367 logs.go:123] Gathering logs for kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] ...
	I0708 23:04:53.661578  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:53.686131  258367 logs.go:123] Gathering logs for storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] ...
	I0708 23:04:53.686151  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:53.711706  258367 logs.go:123] Gathering logs for kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] ...
	I0708 23:04:53.711749  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:53.756467  258367 logs.go:123] Gathering logs for kubelet ...
	I0708 23:04:53.756491  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 23:04:53.822558  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:53.824068  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:53.869132  258367 logs.go:123] Gathering logs for dmesg ...
	I0708 23:04:53.869150  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 23:04:53.908521  258367 logs.go:123] Gathering logs for describe nodes ...
	I0708 23:04:53.908541  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 23:04:54.042355  258367 logs.go:123] Gathering logs for CRI-O ...
	I0708 23:04:54.042381  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 23:04:54.143221  258367 logs.go:123] Gathering logs for container status ...
	I0708 23:04:54.143246  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 23:04:54.173768  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:54.173789  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	W0708 23:04:54.173883  258367 out.go:230] X Problems detected in kubelet:
	W0708 23:04:54.173893  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:54.173901  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:54.173911  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:54.173915  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:05:04.184611  258367 system_pods.go:59] 18 kube-system pods found
	I0708 23:05:04.184638  258367 system_pods.go:61] "coredns-558bd4d5db-zhg8q" [fbfe6d76-09e7-4b56-8c35-638662f1daaf] Running
	I0708 23:05:04.184644  258367 system_pods.go:61] "csi-hostpath-attacher-0" [06368aac-1744-4da9-99a2-47bfbb93254b] Running
	I0708 23:05:04.184648  258367 system_pods.go:61] "csi-hostpath-provisioner-0" [c6e3d45b-6ca3-4462-b533-40a15141f9ed] Running
	I0708 23:05:04.184653  258367 system_pods.go:61] "csi-hostpath-resizer-0" [58e0abcf-e21c-4040-8d81-a9d93f509885] Running
	I0708 23:05:04.184657  258367 system_pods.go:61] "csi-hostpath-snapshotter-0" [b4319b60-be58-4608-ba01-f1a8fac1d376] Running
	I0708 23:05:04.184667  258367 system_pods.go:61] "csi-hostpathplugin-0" [d94ef600-bad7-4301-b190-602faa1f36a9] Running
	I0708 23:05:04.184672  258367 system_pods.go:61] "etcd-addons-20210708230204-257783" [cf116694-48f9-416c-a5dc-55fdf60853ea] Running
	I0708 23:05:04.184679  258367 system_pods.go:61] "kindnet-ccnc6" [b4d243ff-dec0-4629-b9b4-ae527c2c32bd] Running
	I0708 23:05:04.184684  258367 system_pods.go:61] "kube-apiserver-addons-20210708230204-257783" [f058b0e7-2349-4f21-8599-37d21a5ddcd9] Running
	I0708 23:05:04.184695  258367 system_pods.go:61] "kube-controller-manager-addons-20210708230204-257783" [3f3546e8-dfaf-419b-8eeb-4e7fe07af5fc] Running
	I0708 23:05:04.184699  258367 system_pods.go:61] "kube-proxy-6dvf4" [564c5852-25e0-4d8f-8cb8-ac83fac6ee51] Running
	I0708 23:05:04.184709  258367 system_pods.go:61] "kube-scheduler-addons-20210708230204-257783" [cd710c4c-a3d0-427d-bcab-0ebd546d7cc0] Running
	I0708 23:05:04.184713  258367 system_pods.go:61] "metrics-server-77c99ccb96-g7fdg" [d976d39d-49f5-4cfd-8756-cdcf9a8caa2a] Running
	I0708 23:05:04.184725  258367 system_pods.go:61] "registry-proxy-fbwfb" [040628b4-50ba-4169-a8d6-b9804b46e10c] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0708 23:05:04.184734  258367 system_pods.go:61] "registry-pzwnr" [dd5ee812-d26b-4dfa-a00d-7cc3e2a97c4a] Running
	I0708 23:05:04.184740  258367 system_pods.go:61] "snapshot-controller-989f9ddc8-dw52m" [e4c2411c-bf6b-4d75-9cb7-5d6404e63c8e] Running
	I0708 23:05:04.184745  258367 system_pods.go:61] "snapshot-controller-989f9ddc8-wplln" [7933b7c4-ea5d-4dd9-b07a-b6d744284afe] Running
	I0708 23:05:04.184751  258367 system_pods.go:61] "storage-provisioner" [b28c0bf2-6ddc-4279-a4b8-40712894afe3] Running
	I0708 23:05:04.184756  258367 system_pods.go:74] duration metric: took 10.836653435s to wait for pod list to return data ...
	I0708 23:05:04.184778  258367 default_sa.go:34] waiting for default service account to be created ...
	I0708 23:05:04.186977  258367 default_sa.go:45] found service account: "default"
	I0708 23:05:04.186991  258367 default_sa.go:55] duration metric: took 2.203718ms for default service account to be created ...
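
The default_sa wait above succeeds once a Get for the "default" ServiceAccount stops failing; until that account exists, the serviceaccount admission controller will reject new pods in the namespace. A minimal sketch of the check:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Succeeds once the token controller has created the account,
        // matching the "found service account" line above.
        sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("found service account:", sa.Name)
    }
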
	I0708 23:05:04.186997  258367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 23:05:04.194163  258367 system_pods.go:86] 18 kube-system pods found
	I0708 23:05:04.194189  258367 system_pods.go:89] "coredns-558bd4d5db-zhg8q" [fbfe6d76-09e7-4b56-8c35-638662f1daaf] Running
	I0708 23:05:04.194195  258367 system_pods.go:89] "csi-hostpath-attacher-0" [06368aac-1744-4da9-99a2-47bfbb93254b] Running
	I0708 23:05:04.194201  258367 system_pods.go:89] "csi-hostpath-provisioner-0" [c6e3d45b-6ca3-4462-b533-40a15141f9ed] Running
	I0708 23:05:04.194210  258367 system_pods.go:89] "csi-hostpath-resizer-0" [58e0abcf-e21c-4040-8d81-a9d93f509885] Running
	I0708 23:05:04.194215  258367 system_pods.go:89] "csi-hostpath-snapshotter-0" [b4319b60-be58-4608-ba01-f1a8fac1d376] Running
	I0708 23:05:04.194223  258367 system_pods.go:89] "csi-hostpathplugin-0" [d94ef600-bad7-4301-b190-602faa1f36a9] Running
	I0708 23:05:04.194228  258367 system_pods.go:89] "etcd-addons-20210708230204-257783" [cf116694-48f9-416c-a5dc-55fdf60853ea] Running
	I0708 23:05:04.194239  258367 system_pods.go:89] "kindnet-ccnc6" [b4d243ff-dec0-4629-b9b4-ae527c2c32bd] Running
	I0708 23:05:04.194244  258367 system_pods.go:89] "kube-apiserver-addons-20210708230204-257783" [f058b0e7-2349-4f21-8599-37d21a5ddcd9] Running
	I0708 23:05:04.194251  258367 system_pods.go:89] "kube-controller-manager-addons-20210708230204-257783" [3f3546e8-dfaf-419b-8eeb-4e7fe07af5fc] Running
	I0708 23:05:04.194256  258367 system_pods.go:89] "kube-proxy-6dvf4" [564c5852-25e0-4d8f-8cb8-ac83fac6ee51] Running
	I0708 23:05:04.194265  258367 system_pods.go:89] "kube-scheduler-addons-20210708230204-257783" [cd710c4c-a3d0-427d-bcab-0ebd546d7cc0] Running
	I0708 23:05:04.194270  258367 system_pods.go:89] "metrics-server-77c99ccb96-g7fdg" [d976d39d-49f5-4cfd-8756-cdcf9a8caa2a] Running
	I0708 23:05:04.194281  258367 system_pods.go:89] "registry-proxy-fbwfb" [040628b4-50ba-4169-a8d6-b9804b46e10c] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0708 23:05:04.194286  258367 system_pods.go:89] "registry-pzwnr" [dd5ee812-d26b-4dfa-a00d-7cc3e2a97c4a] Running
	I0708 23:05:04.194295  258367 system_pods.go:89] "snapshot-controller-989f9ddc8-dw52m" [e4c2411c-bf6b-4d75-9cb7-5d6404e63c8e] Running
	I0708 23:05:04.194299  258367 system_pods.go:89] "snapshot-controller-989f9ddc8-wplln" [7933b7c4-ea5d-4dd9-b07a-b6d744284afe] Running
	I0708 23:05:04.194306  258367 system_pods.go:89] "storage-provisioner" [b28c0bf2-6ddc-4279-a4b8-40712894afe3] Running
	I0708 23:05:04.194311  258367 system_pods.go:126] duration metric: took 7.310147ms to wait for k8s-apps to be running ...
	I0708 23:05:04.194322  258367 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 23:05:04.194365  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:05:04.211032  258367 system_svc.go:56] duration metric: took 16.707896ms WaitForService to wait for kubelet.
	I0708 23:05:04.211068  258367 kubeadm.go:547] duration metric: took 1m59.15259272s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
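
The system_svc check shells out to systemd; with --quiet, is-active reports state only through its exit code. A sketch mirroring the exact command above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same check the system_svc lines run over SSH:
        //   sudo systemctl is-active --quiet service kubelet
        // A nil error from Run() means exit code 0, i.e. the unit is active.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
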
	I0708 23:05:04.211096  258367 node_conditions.go:102] verifying NodePressure condition ...
	I0708 23:05:04.213899  258367 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0708 23:05:04.213926  258367 node_conditions.go:123] node cpu capacity is 2
	I0708 23:05:04.213937  258367 node_conditions.go:105] duration metric: took 2.836867ms to run NodePressure ...
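
The NodePressure verification reads the node's reported capacity (here 40474572Ki of ephemeral storage and 2 CPUs) and its pressure conditions. A client-go sketch of one plausible way to do that; the exact fields minikube inspects are an assumption:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity figures like the Ki/cpu values above come straight
            // from node status.
            fmt.Println(n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure ||
                    c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
                    fmt.Println("  pressure detected:", c.Type)
                }
            }
        }
    }
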
	I0708 23:05:04.213945  258367 start.go:225] waiting for startup goroutines ...
	I0708 23:05:04.557167  258367 start.go:462] kubectl: 1.21.2, cluster: 1.21.2 (minor skew: 0)
	I0708 23:05:04.559326  258367 out.go:165] * Done! kubectl is now configured to use "addons-20210708230204-257783" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Thu 2021-07-08 23:02:10 UTC, end at Thu 2021-07-08 23:14:09 UTC. --
	Jul 08 23:13:51 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:13:51.804872856Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-59b45fb494-xzc2t Namespace:ingress-nginx ID:33e8fba615e5a04b0cfa02d02f2ab5d268a81df30f06243c90a272e80a97e87e NetNS:/var/run/netns/cde73f17-b51c-4828-afba-6526ab4fc9ec Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Jul 08 23:13:51 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:13:51.805032181Z" level=info msg="About to del CNI network kindnet (type=ptp)"
	Jul 08 23:13:51 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:13:51.955290916Z" level=info msg="Removing container: 8dbb9502aae74c385f3442d031076bc5613a11c7c086f9c1feddbab8e8e06556" id=60fa1fee-1207-471f-a039-c4b6388d217b name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:13:51 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:13:51.980424848Z" level=info msg="Removed container 8dbb9502aae74c385f3442d031076bc5613a11c7c086f9c1feddbab8e8e06556: ingress-nginx/ingress-nginx-controller-59b45fb494-xzc2t/controller" id=60fa1fee-1207-471f-a039-c4b6388d217b name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:13:52 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:13:52.018581295Z" level=info msg="Stopped pod sandbox: 33e8fba615e5a04b0cfa02d02f2ab5d268a81df30f06243c90a272e80a97e87e" id=eb0fd55f-40df-4995-9656-89031a9c8ed3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:13:52 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:13:52.957012902Z" level=info msg="Stopping pod sandbox: 33e8fba615e5a04b0cfa02d02f2ab5d268a81df30f06243c90a272e80a97e87e" id=f6c200d0-ccf5-45ae-b086-5b031849a62f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:13:52 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:13:52.957055051Z" level=info msg="Stopped pod sandbox (already stopped): 33e8fba615e5a04b0cfa02d02f2ab5d268a81df30f06243c90a272e80a97e87e" id=f6c200d0-ccf5-45ae-b086-5b031849a62f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:13:53 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:13:53.958916705Z" level=info msg="Stopping pod sandbox: 33e8fba615e5a04b0cfa02d02f2ab5d268a81df30f06243c90a272e80a97e87e" id=eaa250b5-7206-497e-85a1-322cba52a3b9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:13:53 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:13:53.958957549Z" level=info msg="Stopped pod sandbox (already stopped): 33e8fba615e5a04b0cfa02d02f2ab5d268a81df30f06243c90a272e80a97e87e" id=eaa250b5-7206-497e-85a1-322cba52a3b9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.445069389Z" level=info msg="Removing container: 2ab14000c1ca7645960ac681259c868cc81a36e139da68f90e65733bb5507ff0" id=16c1a38c-9204-4519-bce4-2c1f839c102f name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.465563664Z" level=info msg="Removed container 2ab14000c1ca7645960ac681259c868cc81a36e139da68f90e65733bb5507ff0: ingress-nginx/ingress-nginx-admission-patch-pp588/patch" id=16c1a38c-9204-4519-bce4-2c1f839c102f name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.466490903Z" level=info msg="Removing container: bc89c8667db10aa5ee5cb8d59e14e6d735d87123cfbc879bad31a5c79ceb7c0d" id=9c0aacba-7a15-42e4-aabf-79cc0243b155 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.491106092Z" level=info msg="Removed container bc89c8667db10aa5ee5cb8d59e14e6d735d87123cfbc879bad31a5c79ceb7c0d: ingress-nginx/ingress-nginx-admission-create-xwrn5/create" id=9c0aacba-7a15-42e4-aabf-79cc0243b155 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.492181998Z" level=info msg="Stopping pod sandbox: 2f99d4ef7cfb047227b17da11cdb8fa3106994802d20592ef55f39dc09124a2e" id=ccbe706e-91ad-4ef7-80fb-dfdd5882161b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.492213013Z" level=info msg="Stopped pod sandbox (already stopped): 2f99d4ef7cfb047227b17da11cdb8fa3106994802d20592ef55f39dc09124a2e" id=ccbe706e-91ad-4ef7-80fb-dfdd5882161b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.492421724Z" level=info msg="Removing pod sandbox: 2f99d4ef7cfb047227b17da11cdb8fa3106994802d20592ef55f39dc09124a2e" id=ec819580-136a-47e4-9ebe-f039aafe86ce name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.520538727Z" level=info msg="Removed pod sandbox: 2f99d4ef7cfb047227b17da11cdb8fa3106994802d20592ef55f39dc09124a2e" id=ec819580-136a-47e4-9ebe-f039aafe86ce name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.520959816Z" level=info msg="Stopping pod sandbox: 33e8fba615e5a04b0cfa02d02f2ab5d268a81df30f06243c90a272e80a97e87e" id=aeaea840-f944-4fc0-9613-a3dd5de442ae name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.520986261Z" level=info msg="Stopped pod sandbox (already stopped): 33e8fba615e5a04b0cfa02d02f2ab5d268a81df30f06243c90a272e80a97e87e" id=aeaea840-f944-4fc0-9613-a3dd5de442ae name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.521292209Z" level=info msg="Removing pod sandbox: 33e8fba615e5a04b0cfa02d02f2ab5d268a81df30f06243c90a272e80a97e87e" id=5c3d44f5-9ea3-4d2d-8718-8d7399f02c16 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.539420759Z" level=info msg="Removed pod sandbox: 33e8fba615e5a04b0cfa02d02f2ab5d268a81df30f06243c90a272e80a97e87e" id=5c3d44f5-9ea3-4d2d-8718-8d7399f02c16 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.541160015Z" level=info msg="Stopping pod sandbox: 74c774f80ed8489e938ac178e6b0d897c5a7aee0c824b0fe51ab28947cd14b45" id=9645e6b6-7597-4973-83a2-494ee7aca2f0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.541195124Z" level=info msg="Stopped pod sandbox (already stopped): 74c774f80ed8489e938ac178e6b0d897c5a7aee0c824b0fe51ab28947cd14b45" id=9645e6b6-7597-4973-83a2-494ee7aca2f0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.541378850Z" level=info msg="Removing pod sandbox: 74c774f80ed8489e938ac178e6b0d897c5a7aee0c824b0fe51ab28947cd14b45" id=64ec68eb-26d4-4d03-b089-2ba986cb51f3 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.562429112Z" level=info msg="Removed pod sandbox: 74c774f80ed8489e938ac178e6b0d897c5a7aee0c824b0fe51ab28947cd14b45" id=64ec68eb-26d4-4d03-b089-2ba986cb51f3 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	
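(The "-- Logs begin at ... --" header above is journalctl's, so this CRI-O section is read from the node's systemd journal. A minimal sketch of pulling the same logs by hand, assuming the standard minikube node shell:

  # open a shell on the node and dump the crio unit's journal, unpaged
  minikube -p addons-20210708230204-257783 ssh -- sudo journalctl -u crio --no-pager
)
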
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID
	7f205f54527c5       d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2                                    4 minutes ago       Exited              olm-operator              6                   39716b2bcc234
	e098c5912861b       d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2                                    4 minutes ago       Exited              catalog-operator          6                   0d57c0d81cf66
	abe6a692c11c1       docker.io/library/nginx@sha256:833dc94560d9cdb945a0b83bb02b93372ce2dcdf34f4df30fe8f5656ce5d3fb5     4 minutes ago       Running             nginx                     0                   98f760962d57f
	ebfe0e28edd0e       docker.io/library/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   6 minutes ago       Running             busybox                   0                   ead2d6167aad3
	bb436f463bf93       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                    10 minutes ago      Running             storage-provisioner       0                   d5ad87804e6dd
	b6a45c30ce188       1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8                                    10 minutes ago      Running             coredns                   0                   d821186888bb4
	49b31069db4e9       d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105                                    11 minutes ago      Running             kube-proxy                0                   93a788d293a45
	73ad782fb9631       f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301                                    11 minutes ago      Running             kindnet-cni               0                   9598713e2c095
	44be06430cace       ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4                                    11 minutes ago      Running             kube-scheduler            0                   a9740d6135af4
	8f4ecb2eb8a37       9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630                                    11 minutes ago      Running             kube-controller-manager   0                   9266c7ca5f01a
	31b86861c38b7       2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0                                    11 minutes ago      Running             kube-apiserver            0                   15a6cd929d883
	22dcce2859577       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28                                    11 minutes ago      Running             etcd                      0                   64b65798fba40
	
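(The table above is the runtime's own container listing; a similar view can be reproduced with crictl on the node, a sketch assuming crictl is on the node's PATH as it is in minikube images:

  # list all CRI-O containers, including exited ones such as the crash-looping olm pods
  minikube -p addons-20210708230204-257783 ssh -- sudo crictl ps -a
)
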
	* 
	* ==> coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210708230204-257783
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210708230204-257783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5
	                    minikube.k8s.io/name=addons-20210708230204-257783
	                    minikube.k8s.io/updated_at=2021_07_08T23_02_52_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210708230204-257783
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 08 Jul 2021 23:02:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210708230204-257783
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 08 Jul 2021 23:14:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 08 Jul 2021 23:09:42 +0000   Thu, 08 Jul 2021 23:02:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 08 Jul 2021 23:09:42 +0000   Thu, 08 Jul 2021 23:02:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 08 Jul 2021 23:09:42 +0000   Thu, 08 Jul 2021 23:02:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 08 Jul 2021 23:09:42 +0000   Thu, 08 Jul 2021 23:03:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210708230204-257783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                d6a6fe2c-69df-437d-be5e-65297693e451
	  Boot ID:                    7cbe50af-3171-4d81-8fca-78216a04984f
	  Kernel Version:             5.8.0-1038-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.2
	  Kube-Proxy Version:         v1.21.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m7s
	  default                     nginx                                                   0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m1s
	  kube-system                 coredns-558bd4d5db-zhg8q                                100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (2%!)(MISSING)     11m
	  kube-system                 etcd-addons-20210708230204-257783                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (1%!)(MISSING)       0 (0%!)(MISSING)         11m
	  kube-system                 kindnet-ccnc6                                           100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (0%!)(MISSING)        50Mi (0%!)(MISSING)      11m
	  kube-system                 kube-apiserver-addons-20210708230204-257783             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         11m
	  kube-system                 kube-controller-manager-addons-20210708230204-257783    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         11m
	  kube-system                 kube-proxy-6dvf4                                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         11m
	  kube-system                 kube-scheduler-addons-20210708230204-257783             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         11m
	  kube-system                 storage-provisioner                                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         11m
	  olm                         catalog-operator-75d496484d-m4465                       10m (0%!)(MISSING)      0 (0%!)(MISSING)      80Mi (1%!)(MISSING)        0 (0%!)(MISSING)         10m
	  olm                         olm-operator-859c88c96-mqphx                            10m (0%!)(MISSING)      0 (0%!)(MISSING)      160Mi (2%!)(MISSING)       0 (0%!)(MISSING)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                870m (43%!)(MISSING)  100m (5%!)(MISSING)
	  memory             460Mi (5%!)(MISSING)  220Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-32Mi     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-64Ki     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  11m (x5 over 11m)  kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x5 over 11m)  kubelet     Node addons-20210708230204-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x4 over 11m)  kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet     Node addons-20210708230204-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                10m                kubelet     Node addons-20210708230204-257783 status is now: NodeReady
	
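(The node description above is the standard kubectl output; to regenerate it live against this cluster:

  # same data as the "describe nodes" section, fetched directly
  kubectl --context addons-20210708230204-257783 describe node addons-20210708230204-257783
)
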
	* 
	* ==> dmesg <==
	* [  +0.000490] FS-Cache: N-cookie c=0000000017e17a7f [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000786] FS-Cache: N-cookie d=0000000052778918 n=0000000001cad34c
	[  +0.000659] FS-Cache: N-key=[8] '2e75010000000000'
	[  +0.311255] FS-Cache: Duplicate cookie detected
	[  +0.000504] FS-Cache: O-cookie c=0000000014ac9dbc [p=000000001984cbd2 fl=226 nc=0 na=1]
	[  +0.000814] FS-Cache: O-cookie d=0000000052778918 n=00000000bafd5126
	[  +0.000704] FS-Cache: O-key=[8] '2c75010000000000'
	[  +0.000510] FS-Cache: N-cookie c=00000000e94062d6 [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000812] FS-Cache: N-cookie d=0000000052778918 n=00000000edbe8e34
	[  +0.000658] FS-Cache: N-key=[8] '2c75010000000000'
	[  +0.000965] FS-Cache: Duplicate cookie detected
	[  +0.000522] FS-Cache: O-cookie c=00000000f7e9a7d0 [p=000000001984cbd2 fl=226 nc=0 na=1]
	[  +0.000899] FS-Cache: O-cookie d=0000000052778918 n=000000008aaa8b20
	[  +0.000656] FS-Cache: O-key=[8] '2e75010000000000'
	[  +0.000483] FS-Cache: N-cookie c=00000000e94062d6 [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000799] FS-Cache: N-cookie d=0000000052778918 n=00000000d5f43b3c
	[  +0.000664] FS-Cache: N-key=[8] '2e75010000000000'
	[  +0.000960] FS-Cache: Duplicate cookie detected
	[  +0.000564] FS-Cache: O-cookie c=000000005908ab4f [p=000000001984cbd2 fl=226 nc=0 na=1]
	[  +0.000814] FS-Cache: O-cookie d=0000000052778918 n=00000000bdc5b826
	[  +0.000669] FS-Cache: O-key=[8] '2d75010000000000'
	[  +0.000501] FS-Cache: N-cookie c=00000000e94062d6 [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000808] FS-Cache: N-cookie d=0000000052778918 n=000000005db15c82
	[  +0.000658] FS-Cache: N-key=[8] '2d75010000000000'
	[Jul 8 22:38] tee (195612): /proc/195320/oom_adj is deprecated, please use /proc/195320/oom_score_adj instead.
	
	* 
	* ==> etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] <==
	* 2021-07-08 23:10:27.017256 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:10:37.016708 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:10:47.016369 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:10:57.016864 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:11:07.016797 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:11:17.017224 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:11:27.016761 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:11:37.016832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:11:47.016714 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:11:57.017144 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:12:07.016603 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:12:17.016247 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:12:27.016523 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:12:37.016180 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:12:43.841784 I | mvcc: store.index: compact 1557
	2021-07-08 23:12:43.864605 I | mvcc: finished scheduled compaction at 1557 (took 22.283535ms)
	2021-07-08 23:12:47.016541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:12:57.016420 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:13:07.016566 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:13:17.017065 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:13:27.016495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:13:37.016803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:13:47.016564 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:13:57.016580 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:14:07.016508 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
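(Every /health probe above returns 200, so etcd itself stays healthy through the failure window. A sketch of the same check run through the apiserver, which avoids handling etcd's client certs:

  # the apiserver exposes an etcd health check at this raw path
  kubectl --context addons-20210708230204-257783 get --raw /healthz/etcd
)
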
	* 
	* ==> kernel <==
	*  23:14:09 up  1:56,  0 users,  load average: 0.12, 0.54, 1.11
	Linux addons-20210708230204-257783 5.8.0-1038-aws #40~20.04.1-Ubuntu SMP Thu Jun 17 13:20:15 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] <==
	* W0708 23:09:02.283841       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0708 23:09:02.303318       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	I0708 23:09:07.665421       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
	I0708 23:09:27.853955       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:09:27.853995       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:09:27.854003       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:09:34.057950       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0708 23:10:12.800534       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:10:12.800570       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:10:12.800577       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:10:57.166062       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:10:57.166098       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:10:57.166106       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:11:29.843826       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:11:29.843861       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:11:29.843869       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:12:09.296242       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:12:09.296279       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:12:09.296287       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:12:49.572203       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:12:49.572239       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:12:49.572247       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:13:25.064733       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:13:25.064769       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:13:25.064777       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] <==
	* E0708 23:09:11.774902       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:09:12.196678       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:09:21.928983       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:09:23.674003       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:09:24.024335       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:09:39.895152       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:09:40.120656       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:09:40.541235       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:10:07.453582       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:10:21.518450       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:10:25.776581       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:11:01.300094       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:11:10.020744       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:11:12.030598       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:11:54.492455       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:12:01.045932       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:12:10.299904       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:12:44.257985       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:12:52.790702       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:13:09.631759       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:13:23.326138       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:13:42.071365       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:13:45.522363       1 tokens_controller.go:262] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-xmd7p" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	E0708 23:13:57.861722       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:14:03.538485       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
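(The repeated PartialObjectMetadata watch failures typically indicate a metadata informer retrying a resource whose CRD was removed mid-watch, consistent with the apiserver's "Terminating all watchers" messages earlier. One way to confirm which type disappeared, a sketch:

  # compare the surviving CRDs against the types the informer still watches
  kubectl --context addons-20210708230204-257783 get crd
)
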
	* 
	* ==> kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] <==
	* I0708 23:03:09.902381       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0708 23:03:09.906684       1 server_others.go:140] Detected node IP 192.168.49.2
	W0708 23:03:09.906750       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0708 23:03:10.210383       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0708 23:03:10.210464       1 server_others.go:212] Using iptables Proxier.
	I0708 23:03:10.210493       1 server_others.go:219] creating dualStackProxier for iptables.
	W0708 23:03:10.210520       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0708 23:03:10.210831       1 server.go:643] Version: v1.21.2
	I0708 23:03:10.211356       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0708 23:03:10.211432       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0708 23:03:10.212794       1 config.go:315] Starting service config controller
	I0708 23:03:10.212844       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0708 23:03:10.212883       1 config.go:224] Starting endpoint slice config controller
	I0708 23:03:10.212913       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0708 23:03:10.271414       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0708 23:03:10.348217       1 shared_informer.go:247] Caches are synced for service config 
	W0708 23:03:10.386942       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0708 23:03:10.413707       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	W0708 23:09:03.367812       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] <==
	* I0708 23:02:43.963281       1 serving.go:347] Generated self-signed cert in-memory
	W0708 23:02:48.492553       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 23:02:48.492619       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 23:02:48.492651       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 23:02:48.492673       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 23:02:48.569268       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0708 23:02:48.569786       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 23:02:48.569806       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 23:02:48.569820       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0708 23:02:48.580237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:02:48.580442       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 23:02:48.580560       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:02:48.580655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:02:48.580751       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 23:02:48.580849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 23:02:48.580946       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 23:02:48.581068       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:48.581168       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:02:48.581258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:02:48.587927       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:48.588058       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:48.588185       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:02:48.588270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:49.542237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0708 23:02:50.170585       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-07-08 23:02:10 UTC, end at Thu 2021-07-08 23:14:09 UTC. --
	Jul 08 23:13:48 addons-20210708230204-257783 kubelet[1413]: E0708 23:13:48.081250    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-m4465_olm(9e760c3b-a92b-4bfe-9207-05ac187021fb)\"" pod="olm/catalog-operator-75d496484d-m4465" podUID=9e760c3b-a92b-4bfe-9207-05ac187021fb
	Jul 08 23:13:48 addons-20210708230204-257783 kubelet[1413]: E0708 23:13:48.561963    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:13:48 addons-20210708230204-257783 kubelet[1413]: E0708 23:13:48.733583    1413 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-xzc2t.168ff3c7833c32a7", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-xzc2t", UID:"c78b98b9-8fb9-4e5f-a0ca-6f457bf905da", APIVersion:"v1", ResourceVersion:"958", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210708230204-257783"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc031ff8b2b807aa7, ext:657300521502, loc:(*time.Location)(0x6caede0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc031ff8b2b807aa7, ext:657300521502, loc:(*time.Location)(0x6caede0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-xzc2t.168ff3c7833c32a7" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 08 23:13:48 addons-20210708230204-257783 kubelet[1413]: E0708 23:13:48.734988    1413 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-xzc2t.168ff3c7833fe274", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-xzc2t", UID:"c78b98b9-8fb9-4e5f-a0ca-6f457bf905da", APIVersion:"v1", ResourceVersion:"958", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210708230204-257783"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc031ff8b2b842a74, ext:657300763123, loc:(*time.Location)(0x6caede0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc031ff8b2b842a74, ext:657300763123, loc:(*time.Location)(0x6caede0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-xzc2t.168ff3c7833fe274" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 08 23:13:51 addons-20210708230204-257783 kubelet[1413]: I0708 23:13:51.954528    1413 scope.go:111] "RemoveContainer" containerID="8dbb9502aae74c385f3442d031076bc5613a11c7c086f9c1feddbab8e8e06556"
	Jul 08 23:13:52 addons-20210708230204-257783 kubelet[1413]: I0708 23:13:52.988205    1413 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpks9\" (UniqueName: \"kubernetes.io/projected/c78b98b9-8fb9-4e5f-a0ca-6f457bf905da-kube-api-access-vpks9\") pod \"c78b98b9-8fb9-4e5f-a0ca-6f457bf905da\" (UID: \"c78b98b9-8fb9-4e5f-a0ca-6f457bf905da\") "
	Jul 08 23:13:52 addons-20210708230204-257783 kubelet[1413]: I0708 23:13:52.988260    1413 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c78b98b9-8fb9-4e5f-a0ca-6f457bf905da-webhook-cert\") pod \"c78b98b9-8fb9-4e5f-a0ca-6f457bf905da\" (UID: \"c78b98b9-8fb9-4e5f-a0ca-6f457bf905da\") "
	Jul 08 23:13:52 addons-20210708230204-257783 kubelet[1413]: I0708 23:13:52.991491    1413 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c78b98b9-8fb9-4e5f-a0ca-6f457bf905da-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c78b98b9-8fb9-4e5f-a0ca-6f457bf905da" (UID: "c78b98b9-8fb9-4e5f-a0ca-6f457bf905da"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 08 23:13:52 addons-20210708230204-257783 kubelet[1413]: I0708 23:13:52.992370    1413 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c78b98b9-8fb9-4e5f-a0ca-6f457bf905da-kube-api-access-vpks9" (OuterVolumeSpecName: "kube-api-access-vpks9") pod "c78b98b9-8fb9-4e5f-a0ca-6f457bf905da" (UID: "c78b98b9-8fb9-4e5f-a0ca-6f457bf905da"). InnerVolumeSpecName "kube-api-access-vpks9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 08 23:13:53 addons-20210708230204-257783 kubelet[1413]: I0708 23:13:53.089388    1413 reconciler.go:319] "Volume detached for volume \"kube-api-access-vpks9\" (UniqueName: \"kubernetes.io/projected/c78b98b9-8fb9-4e5f-a0ca-6f457bf905da-kube-api-access-vpks9\") on node \"addons-20210708230204-257783\" DevicePath \"\""
	Jul 08 23:13:53 addons-20210708230204-257783 kubelet[1413]: I0708 23:13:53.089422    1413 reconciler.go:319] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c78b98b9-8fb9-4e5f-a0ca-6f457bf905da-webhook-cert\") on node \"addons-20210708230204-257783\" DevicePath \"\""
	Jul 08 23:13:58 addons-20210708230204-257783 kubelet[1413]: W0708 23:13:58.605675    1413 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Jul 08 23:13:58 addons-20210708230204-257783 kubelet[1413]: E0708 23:13:58.623393    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:13:58 addons-20210708230204-257783 kubelet[1413]: E0708 23:13:58.637485    1413 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/c78b98b9-8fb9-4e5f-a0ca-6f457bf905da/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-xzc2t"
	Jul 08 23:14:00 addons-20210708230204-257783 kubelet[1413]: I0708 23:14:00.444380    1413 scope.go:111] "RemoveContainer" containerID="2ab14000c1ca7645960ac681259c868cc81a36e139da68f90e65733bb5507ff0"
	Jul 08 23:14:00 addons-20210708230204-257783 kubelet[1413]: I0708 23:14:00.465735    1413 scope.go:111] "RemoveContainer" containerID="bc89c8667db10aa5ee5cb8d59e14e6d735d87123cfbc879bad31a5c79ceb7c0d"
	Jul 08 23:14:01 addons-20210708230204-257783 kubelet[1413]: I0708 23:14:01.080823    1413 scope.go:111] "RemoveContainer" containerID="7f205f54527c56ba1a0e6e07e18b828332512a96455e888e61a0cff526cfa606"
	Jul 08 23:14:01 addons-20210708230204-257783 kubelet[1413]: I0708 23:14:01.081032    1413 scope.go:111] "RemoveContainer" containerID="e098c5912861b6cb83d99c870f2b8cda38b831875e6440dfc1c89af8a8ea28d2"
	Jul 08 23:14:01 addons-20210708230204-257783 kubelet[1413]: E0708 23:14:01.081183    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-mqphx_olm(9003a85c-a958-402c-8dd8-812ba5acd952)\"" pod="olm/olm-operator-859c88c96-mqphx" podUID=9003a85c-a958-402c-8dd8-812ba5acd952
	Jul 08 23:14:01 addons-20210708230204-257783 kubelet[1413]: E0708 23:14:01.081362    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-m4465_olm(9e760c3b-a92b-4bfe-9207-05ac187021fb)\"" pod="olm/catalog-operator-75d496484d-m4465" podUID=9e760c3b-a92b-4bfe-9207-05ac187021fb
	Jul 08 23:14:08 addons-20210708230204-257783 kubelet[1413]: E0708 23:14:08.690808    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:14:08 addons-20210708230204-257783 kubelet[1413]: E0708 23:14:08.730302    1413 cadvisor_stats_provider.go:147] "Unable to fetch pod log stats" err="open /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-pp588_20b93733-687b-4973-b9c9-1a5763726272: no such file or directory" pod="ingress-nginx/ingress-nginx-admission-patch-pp588"
	Jul 08 23:14:08 addons-20210708230204-257783 kubelet[1413]: E0708 23:14:08.731478    1413 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/20b93733-687b-4973-b9c9-1a5763726272/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-admission-patch-pp588"
	Jul 08 23:14:08 addons-20210708230204-257783 kubelet[1413]: E0708 23:14:08.737812    1413 cadvisor_stats_provider.go:147] "Unable to fetch pod log stats" err="open /var/log/pods/ingress-nginx_ingress-nginx-admission-create-xwrn5_0e325db5-26e4-4cdb-805a-d3f13029c19d: no such file or directory" pod="ingress-nginx/ingress-nginx-admission-create-xwrn5"
	Jul 08 23:14:08 addons-20210708230204-257783 kubelet[1413]: E0708 23:14:08.739197    1413 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/0e325db5-26e4-4cdb-805a-d3f13029c19d/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-admission-create-xwrn5"
	
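(The kubelet entries show catalog-operator and olm-operator stuck in a 5m0s CrashLoopBackOff; the previous container instance's logs usually hold the crash reason. A sketch:

  # --previous fetches logs from the last terminated container instance
  kubectl --context addons-20210708230204-257783 -n olm logs --previous catalog-operator-75d496484d-m4465
  kubectl --context addons-20210708230204-257783 -n olm logs --previous olm-operator-859c88c96-mqphx
)
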
	* 
	* ==> storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] <==
	* I0708 23:03:53.849841       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 23:03:53.884827       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 23:03:53.884984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 23:03:53.891142       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 23:03:53.891492       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210708230204-257783_62469888-f004-4c23-ac43-994b853f756a!
	I0708 23:03:53.892324       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32752ce7-5488-420e-924b-bb68b54fe2d8", APIVersion:"v1", ResourceVersion:"1011", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210708230204-257783_62469888-f004-4c23-ac43-994b853f756a became leader
	I0708 23:03:53.991866       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210708230204-257783_62469888-f004-4c23-ac43-994b853f756a!
	
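(The provisioner's leader election is recorded on the Endpoints object named in the event above; a sketch for inspecting the lock, assuming the usual client-go endpoints lock annotation:

  # the control-plane.alpha.kubernetes.io/leader annotation names the current holder
  kubectl --context addons-20210708230204-257783 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
)
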

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210708230204-257783 -n addons-20210708230204-257783
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210708230204-257783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context addons-20210708230204-257783 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context addons-20210708230204-257783 describe pod : exit status 1 (56.504302ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:275: kubectl --context addons-20210708230204-257783 describe pod : exit status 1
--- FAIL: TestAddons/parallel/Ingress (303.49s)

TestAddons/parallel/Olm (732.49s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: catalog-operator stabilized in 29.900059ms
addons_test.go:480: olm-operator stabilized in 32.349596ms

=== CONT  TestAddons/parallel/Olm
addons_test.go:482: failed waiting for packageserver deployment to stabilize: timed out waiting for the condition
addons_test.go:484: packageserver stabilized in 6m0.032572323s
addons_test.go:486: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...
helpers_test.go:340: "catalog-operator-75d496484d-m4465" [9e760c3b-a92b-4bfe-9207-05ac187021fb] Running / Ready:ContainersNotReady (containers with unready status: [catalog-operator]) / ContainersReady:ContainersNotReady (containers with unready status: [catalog-operator])
addons_test.go:486: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.005785598s
addons_test.go:489: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...
helpers_test.go:340: "olm-operator-859c88c96-mqphx" [9003a85c-a958-402c-8dd8-812ba5acd952] Running / Ready:ContainersNotReady (containers with unready status: [olm-operator]) / ContainersReady:ContainersNotReady (containers with unready status: [olm-operator])
addons_test.go:489: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.005869962s
addons_test.go:492: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...

=== CONT  TestAddons/parallel/Olm
addons_test.go:492: ***** TestAddons/parallel/Olm: pod "app=packageserver" failed to start within 6m0s: timed out waiting for the condition ****
addons_test.go:492: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210708230204-257783 -n addons-20210708230204-257783
addons_test.go:492: TestAddons/parallel/Olm: showing logs for failed pods as of 2021-07-08 23:17:14.978540494 +0000 UTC m=+964.831353443
addons_test.go:493: failed waiting for pod packageserver: app=packageserver within 6m0s: timed out waiting for the condition
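(No pod matching app=packageserver ever appeared, which points at the packageserver deployment that OLM itself creates rather than at the two operator pods. A sketch of the next checks, assuming the OLM CRDs are installed, which the addon does:

  # pods the test was waiting for
  kubectl --context addons-20210708230204-257783 -n olm get pods -l app=packageserver
  # csv is the short name for clusterserviceversions; packageserver is delivered as one
  kubectl --context addons-20210708230204-257783 -n olm get csv
)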
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Olm]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210708230204-257783
helpers_test.go:236: (dbg) docker inspect addons-20210708230204-257783:

-- stdout --
	[
	    {
	        "Id": "077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33",
	        "Created": "2021-07-08T23:02:08.861515915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-07-08T23:02:09.476547454Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/hostname",
	        "HostsPath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/hosts",
	        "LogPath": "/var/lib/docker/containers/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33-json.log",
	        "Name": "/addons-20210708230204-257783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210708230204-257783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210708230204-257783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa3171af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/docker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ab16a0514720fa3890d894ed341f3c506b9d33e26a72d699f4b1c2ca0737efec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-20210708230204-257783",
	                "Source": "/var/lib/docker/volumes/addons-20210708230204-257783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210708230204-257783",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210708230204-257783",
	                "name.minikube.sigs.k8s.io": "addons-20210708230204-257783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e2a6a80abfdd90c8450743b36a071af48d5dfe35af3935906d8f359ff63e391d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49502"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49501"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49498"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49499"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e2a6a80abfdd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210708230204-257783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "077ecedfa7d5",
	                        "addons-20210708230204-257783"
	                    ],
	                    "NetworkID": "1f94ce698172ccdc730b6d5814ec69a10719715b26179dea78e95db77a131746",
	                    "EndpointID": "222e425dd05f5203c7200057bfd152a381a3ac67c46e9ae1bda0a98569a14d86",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
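Rather than scanning the full docker inspect JSON above, individual fields can be pulled with Go templates; a minimal sketch (field paths match the output above; the 8443/tcp lookup mirrors the template minikube itself uses later in this log):

    docker inspect -f '{{.State.Status}}' addons-20210708230204-257783
    docker inspect -f '{{(index .NetworkSettings.Networks "addons-20210708230204-257783").IPAddress}}' addons-20210708230204-257783
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-20210708230204-257783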
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210708230204-257783 -n addons-20210708230204-257783
helpers_test.go:245: <<< TestAddons/parallel/Olm FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Olm]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 logs -n 25
helpers_test.go:253: TestAddons/parallel/Olm logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                  |                Profile                |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                 | download-only-20210708230110-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:01:49 UTC | Thu, 08 Jul 2021 23:01:49 UTC |
	| delete  | -p                                    | download-only-20210708230110-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:01:49 UTC | Thu, 08 Jul 2021 23:01:49 UTC |
	|         | download-only-20210708230110-257783   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-only-20210708230110-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:01:49 UTC | Thu, 08 Jul 2021 23:01:49 UTC |
	|         | download-only-20210708230110-257783   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-docker-20210708230149-257783 | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:02:04 UTC | Thu, 08 Jul 2021 23:02:04 UTC |
	|         | download-docker-20210708230149-257783 |                                       |         |         |                               |                               |
	| start   | -p                                    | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:02:04 UTC | Thu, 08 Jul 2021 23:05:04 UTC |
	|         | addons-20210708230204-257783          |                                       |         |         |                               |                               |
	|         | --wait=true --memory=4000             |                                       |         |         |                               |                               |
	|         | --alsologtostderr                     |                                       |         |         |                               |                               |
	|         | --addons=registry                     |                                       |         |         |                               |                               |
	|         | --addons=metrics-server               |                                       |         |         |                               |                               |
	|         | --addons=olm                          |                                       |         |         |                               |                               |
	|         | --addons=volumesnapshots              |                                       |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver          |                                       |         |         |                               |                               |
	|         | --driver=docker                       |                                       |         |         |                               |                               |
	|         | --container-runtime=crio              |                                       |         |         |                               |                               |
	|         | --addons=ingress                      |                                       |         |         |                               |                               |
	|         | --addons=gcp-auth                     |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:05:17 UTC | Thu, 08 Jul 2021 23:05:17 UTC |
	|         | ip                                    |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:07:59 UTC | Thu, 08 Jul 2021 23:08:00 UTC |
	|         | addons disable registry               |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:08:00 UTC | Thu, 08 Jul 2021 23:08:01 UTC |
	|         | logs -n 25                            |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:08:11 UTC | Thu, 08 Jul 2021 23:08:17 UTC |
	|         | addons disable gcp-auth               |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:08:53 UTC | Thu, 08 Jul 2021 23:09:00 UTC |
	|         | addons disable                        |                                       |         |         |                               |                               |
	|         | csi-hostpath-driver                   |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:09:00 UTC | Thu, 08 Jul 2021 23:09:01 UTC |
	|         | addons disable volumesnapshots        |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:09:06 UTC | Thu, 08 Jul 2021 23:09:07 UTC |
	|         | addons disable metrics-server         |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:13:39 UTC | Thu, 08 Jul 2021 23:14:08 UTC |
	|         | addons disable ingress                |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210708230204-257783          | addons-20210708230204-257783          | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:14:09 UTC | Thu, 08 Jul 2021 23:14:09 UTC |
	|         | logs -n 25                            |                                       |         |         |                               |                               |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
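	# Reconstructed from the Audit table above (flags exactly as recorded there), the full
	# start invocation on a single line, for reproducing this run:
	out/minikube-linux-arm64 start -p addons-20210708230204-257783 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker --container-runtime=crio --addons=ingress --addons=gcp-auth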
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/07/08 23:02:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 23:02:04.595093  258367 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:02:04.595210  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:02:04.595232  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:02:04.595242  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:02:04.595370  258367 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:02:04.595646  258367 out.go:293] Setting JSON to false
	I0708 23:02:04.596449  258367 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6273,"bootTime":1625779051,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:02:04.596515  258367 start.go:121] virtualization:  
	I0708 23:02:04.599175  258367 out.go:165] * [addons-20210708230204-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0708 23:02:04.601946  258367 out.go:165]   - MINIKUBE_LOCATION=11942
	I0708 23:02:04.600696  258367 notify.go:169] Checking for updates...
	I0708 23:02:04.604376  258367 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:02:04.606615  258367 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0708 23:02:04.609018  258367 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0708 23:02:04.609162  258367 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:02:04.654113  258367 docker.go:132] docker version: linux-20.10.7
	I0708 23:02:04.654191  258367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:02:04.757973  258367 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:02:04.699594566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:02:04.758088  258367 docker.go:244] overlay module found
	I0708 23:02:04.760790  258367 out.go:165] * Using the docker driver based on user configuration
	I0708 23:02:04.760807  258367 start.go:278] selected driver: docker
	I0708 23:02:04.760812  258367 start.go:751] validating driver "docker" against <nil>
	I0708 23:02:04.760826  258367 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0708 23:02:04.760863  258367 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0708 23:02:04.760877  258367 out.go:230] ! Your cgroup does not allow setting memory.
	I0708 23:02:04.763505  258367 out.go:165]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0708 23:02:04.763788  258367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:02:04.843896  258367 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:02:04.79378416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:02:04.844011  258367 start_flags.go:261] no existing cluster config was found, will generate one from the flags 
	I0708 23:02:04.844165  258367 start_flags.go:687] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0708 23:02:04.844188  258367 cni.go:93] Creating CNI manager for ""
	I0708 23:02:04.844194  258367 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:02:04.844202  258367 start_flags.go:270] Found "CNI" CNI - setting NetworkPlugin=cni
	I0708 23:02:04.844213  258367 start_flags.go:275] config:
	{Name:addons-20210708230204-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
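	# The config above is persisted as JSON under the profile directory (see the profile.go
	# lines below); a hedged sketch of reading one field back with jq, assuming the standard
	# ClusterConfig JSON layout:
	jq '.KubernetesConfig.KubernetesVersion' /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/config.json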
	I0708 23:02:04.846957  258367 out.go:165] * Starting control plane node addons-20210708230204-257783 in cluster addons-20210708230204-257783
	I0708 23:02:04.846989  258367 cache.go:117] Beginning downloading kic base image for docker with crio
	I0708 23:02:04.849261  258367 out.go:165] * Pulling base image ...
	I0708 23:02:04.849281  258367 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:02:04.849315  258367 preload.go:150] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4
	I0708 23:02:04.849325  258367 cache.go:56] Caching tarball of preloaded images
	I0708 23:02:04.849482  258367 preload.go:174] Found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0708 23:02:04.849503  258367 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.2 on crio
	I0708 23:02:04.849776  258367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/config.json ...
	I0708 23:02:04.849798  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/config.json: {Name:mk6e320fc3a23d8bae7a0dedef336e80220bbb8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:04.849933  258367 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0708 23:02:04.882996  258367 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0708 23:02:04.883027  258367 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0708 23:02:04.883046  258367 cache.go:205] Successfully downloaded all kic artifacts
	I0708 23:02:04.883069  258367 start.go:313] acquiring machines lock for addons-20210708230204-257783: {Name:mk70de6724665814088ca786aa95a9c4f42a89ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 23:02:04.883169  258367 start.go:317] acquired machines lock for "addons-20210708230204-257783" in 87.3µs
	I0708 23:02:04.883190  258367 start.go:89] Provisioning new machine with config: &{Name:addons-20210708230204-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0708 23:02:04.883252  258367 start.go:126] createHost starting for "" (driver="docker")
	I0708 23:02:04.885753  258367 out.go:192] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0708 23:02:04.885967  258367 start.go:160] libmachine.API.Create for "addons-20210708230204-257783" (driver="docker")
	I0708 23:02:04.885995  258367 client.go:168] LocalClient.Create starting
	I0708 23:02:04.886069  258367 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem
	I0708 23:02:05.051741  258367 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem
	I0708 23:02:05.311405  258367 cli_runner.go:115] Run: docker network inspect addons-20210708230204-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0708 23:02:05.343632  258367 cli_runner.go:162] docker network inspect addons-20210708230204-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0708 23:02:05.343702  258367 network_create.go:255] running [docker network inspect addons-20210708230204-257783] to gather additional debugging logs...
	I0708 23:02:05.343720  258367 cli_runner.go:115] Run: docker network inspect addons-20210708230204-257783
	W0708 23:02:05.374816  258367 cli_runner.go:162] docker network inspect addons-20210708230204-257783 returned with exit code 1
	I0708 23:02:05.374839  258367 network_create.go:258] error running [docker network inspect addons-20210708230204-257783]: docker network inspect addons-20210708230204-257783: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210708230204-257783
	I0708 23:02:05.374849  258367 network_create.go:260] output of [docker network inspect addons-20210708230204-257783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210708230204-257783
	
	** /stderr **
	I0708 23:02:05.374904  258367 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0708 23:02:05.406189  258367 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x400000eff8] misses:0}
	I0708 23:02:05.406225  258367 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0708 23:02:05.406242  258367 network_create.go:106] attempt to create docker network addons-20210708230204-257783 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0708 23:02:05.406286  258367 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210708230204-257783
	I0708 23:02:05.548723  258367 network_create.go:90] docker network addons-20210708230204-257783 192.168.49.0/24 created
	I0708 23:02:05.548749  258367 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210708230204-257783" container
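	# Hedged verification sketch (not part of this run's output): confirm the subnet and
	# gateway reserved for the cluster network created above.
	docker network inspect addons-20210708230204-257783 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'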
	I0708 23:02:05.548819  258367 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0708 23:02:05.580386  258367 cli_runner.go:115] Run: docker volume create addons-20210708230204-257783 --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --label created_by.minikube.sigs.k8s.io=true
	I0708 23:02:05.612609  258367 oci.go:102] Successfully created a docker volume addons-20210708230204-257783
	I0708 23:02:05.612679  258367 cli_runner.go:115] Run: docker run --rm --name addons-20210708230204-257783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --entrypoint /usr/bin/test -v addons-20210708230204-257783:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0708 23:02:08.695369  258367 cli_runner.go:168] Completed: docker run --rm --name addons-20210708230204-257783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --entrypoint /usr/bin/test -v addons-20210708230204-257783:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (3.082656193s)
	I0708 23:02:08.695392  258367 oci.go:106] Successfully prepared a docker volume addons-20210708230204-257783
	W0708 23:02:08.695416  258367 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0708 23:02:08.695423  258367 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0708 23:02:08.695474  258367 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0708 23:02:08.695676  258367 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:02:08.695770  258367 kic.go:179] Starting extracting preloaded images to volume ...
	I0708 23:02:08.695817  258367 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210708230204-257783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0708 23:02:08.824606  258367 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210708230204-257783 --name addons-20210708230204-257783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210708230204-257783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210708230204-257783 --network addons-20210708230204-257783 --ip 192.168.49.2 --volume addons-20210708230204-257783:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0708 23:02:09.490697  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Running}}
	I0708 23:02:09.549362  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:02:09.602057  258367 cli_runner.go:115] Run: docker exec addons-20210708230204-257783 stat /var/lib/dpkg/alternatives/iptables
	I0708 23:02:09.698879  258367 oci.go:278] the created container "addons-20210708230204-257783" has a running status.
	I0708 23:02:09.698906  258367 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa...
	I0708 23:02:10.039045  258367 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0708 23:02:10.218918  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:02:10.268015  258367 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0708 23:02:10.268054  258367 kic_runner.go:115] Args: [docker exec --privileged addons-20210708230204-257783 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0708 23:02:19.993425  258367 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210708230204-257783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (11.297574532s)
	I0708 23:02:19.993449  258367 kic.go:188] duration metric: took 11.297752 seconds to extract preloaded images to volume
	I0708 23:02:19.993522  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:02:20.033201  258367 machine.go:88] provisioning docker machine ...
	I0708 23:02:20.033239  258367 ubuntu.go:169] provisioning hostname "addons-20210708230204-257783"
	I0708 23:02:20.033296  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:20.073522  258367 main.go:130] libmachine: Using SSH client type: native
	I0708 23:02:20.073698  258367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49502 <nil> <nil>}
	I0708 23:02:20.073711  258367 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210708230204-257783 && echo "addons-20210708230204-257783" | sudo tee /etc/hostname
	I0708 23:02:20.195296  258367 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210708230204-257783
	
	I0708 23:02:20.195361  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:20.239296  258367 main.go:130] libmachine: Using SSH client type: native
	I0708 23:02:20.239447  258367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49502 <nil> <nil>}
	I0708 23:02:20.239473  258367 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210708230204-257783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210708230204-257783/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210708230204-257783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 23:02:20.358452  258367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0708 23:02:20.358475  258367 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube}
	I0708 23:02:20.358493  258367 ubuntu.go:177] setting up certificates
	I0708 23:02:20.358501  258367 provision.go:83] configureAuth start
	I0708 23:02:20.358550  258367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210708230204-257783
	I0708 23:02:20.392386  258367 provision.go:137] copyHostCerts
	I0708 23:02:20.392450  258367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem (1078 bytes)
	I0708 23:02:20.392535  258367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem (1123 bytes)
	I0708 23:02:20.392595  258367 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem (1679 bytes)
	I0708 23:02:20.392646  258367 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem org=jenkins.addons-20210708230204-257783 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210708230204-257783]
	I0708 23:02:21.241180  258367 provision.go:171] copyRemoteCerts
	I0708 23:02:21.241232  258367 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 23:02:21.241271  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.274111  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.353415  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 23:02:21.367359  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0708 23:02:21.381206  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0708 23:02:21.395073  258367 provision.go:86] duration metric: configureAuth took 1.036561369s
	I0708 23:02:21.395089  258367 ubuntu.go:193] setting minikube options for container-runtime
	I0708 23:02:21.395329  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.428650  258367 main.go:130] libmachine: Using SSH client type: native
	I0708 23:02:21.428842  258367 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49502 <nil> <nil>}
	I0708 23:02:21.428857  258367 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	I0708 23:02:21.545263  258367 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 23:02:21.545312  258367 machine.go:91] provisioned docker machine in 1.512083207s
	I0708 23:02:21.545325  258367 client.go:171] LocalClient.Create took 16.659321464s
	I0708 23:02:21.545341  258367 start.go:168] duration metric: libmachine.API.Create for "addons-20210708230204-257783" took 16.659372186s
	I0708 23:02:21.545355  258367 start.go:267] post-start starting for "addons-20210708230204-257783" (driver="docker")
	I0708 23:02:21.545361  258367 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 23:02:21.545424  258367 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 23:02:21.545473  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.578362  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.657484  258367 ssh_runner.go:149] Run: cat /etc/os-release
	I0708 23:02:21.659635  258367 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0708 23:02:21.659659  258367 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0708 23:02:21.659671  258367 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0708 23:02:21.659680  258367 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0708 23:02:21.659688  258367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/addons for local assets ...
	I0708 23:02:21.659746  258367 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/files for local assets ...
	I0708 23:02:21.659773  258367 start.go:270] post-start completed in 114.410955ms
	I0708 23:02:21.660043  258367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210708230204-257783
	I0708 23:02:21.693487  258367 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/config.json ...
	I0708 23:02:21.693686  258367 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 23:02:21.693730  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.726619  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.803192  258367 start.go:129] duration metric: createHost completed in 16.919928817s
	I0708 23:02:21.803212  258367 start.go:80] releasing machines lock for "addons-20210708230204-257783", held for 16.920036259s
	I0708 23:02:21.803282  258367 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210708230204-257783
	I0708 23:02:21.836270  258367 ssh_runner.go:149] Run: systemctl --version
	I0708 23:02:21.836314  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.836333  258367 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0708 23:02:21.836381  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:02:21.875785  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.876742  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:02:21.955373  258367 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0708 23:02:22.089407  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0708 23:02:22.097604  258367 docker.go:153] disabling docker service ...
	I0708 23:02:22.097648  258367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0708 23:02:22.106161  258367 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0708 23:02:22.113999  258367 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0708 23:02:22.187402  258367 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0708 23:02:22.276539  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0708 23:02:22.284617  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 23:02:22.295874  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0708 23:02:22.302331  258367 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0708 23:02:22.302355  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
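	The two sed edits above leave /etc/crio/crio.conf with lines like the following (reconstructed from the replacement expressions; surrounding context on the node may differ):
	pause_image = "k8s.gcr.io/pause:3.4.1"
	cni_default_network = "kindnet"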
	I0708 23:02:22.309020  258367 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 23:02:22.314349  258367 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 23:02:22.319464  258367 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0708 23:02:22.399957  258367 ssh_runner.go:149] Run: sudo systemctl start crio
	I0708 23:02:22.574141  258367 start.go:386] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 23:02:22.574208  258367 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0708 23:02:22.576998  258367 start.go:411] Will wait 60s for crictl version
	I0708 23:02:22.577041  258367 ssh_runner.go:149] Run: sudo crictl version
	I0708 23:02:22.602143  258367 start.go:420] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0708 23:02:22.602207  258367 ssh_runner.go:149] Run: crio --version
	I0708 23:02:22.668574  258367 ssh_runner.go:149] Run: crio --version
	I0708 23:02:22.736496  258367 out.go:165] * Preparing Kubernetes v1.21.2 on CRI-O 1.20.3 ...
	I0708 23:02:22.736572  258367 cli_runner.go:115] Run: docker network inspect addons-20210708230204-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0708 23:02:22.768937  258367 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0708 23:02:22.771623  258367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
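	The one-liner above rewrites /etc/hosts in place: it drops any stale host.minikube.internal entry, appends the fresh mapping, and copies the temp file back. A quick manual check on the node (sketch, assuming SSH access):
	grep host.minikube.internal /etc/hosts   # expected: 192.168.49.1	host.minikube.internal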
	I0708 23:02:22.779326  258367 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:02:22.779408  258367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:02:22.833839  258367 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:02:22.833860  258367 crio.go:333] Images already preloaded, skipping extraction
	I0708 23:02:22.833905  258367 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:02:22.855250  258367 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:02:22.855269  258367 cache_images.go:74] Images are preloaded, skipping loading
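	The preload check above simply lists images through the CRI socket; the same set can be eyeballed by hand (sketch; the jq filter is an assumption, not something minikube runs):
	sudo crictl images --output json | jq -r '.images[].repoTags[]'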
	I0708 23:02:22.855326  258367 ssh_runner.go:149] Run: crio config
	I0708 23:02:22.926289  258367 cni.go:93] Creating CNI manager for ""
	I0708 23:02:22.926310  258367 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:02:22.926319  258367 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0708 23:02:22.926333  258367 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210708230204-257783 NodeName:addons-20210708230204-257783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0708 23:02:22.926456  258367 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "addons-20210708230204-257783"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	
	I0708 23:02:22.926544  258367 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-20210708230204-257783 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0708 23:02:22.926598  258367 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0708 23:02:22.932481  258367 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 23:02:22.932526  258367 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 23:02:22.937866  258367 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0708 23:02:22.948342  258367 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
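	Those two scp calls install the kubelet drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the base unit at /lib/systemd/system/kubelet.service; the merged unit can be inspected on the node with (sketch):
	systemctl cat kubelet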
	I0708 23:02:22.958723  258367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1885 bytes)
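	The rendered config is staged at /var/tmp/minikube/kubeadm.yaml.new; it can be sanity-checked by hand before the real init below (a sketch, assuming kubeadm sits alongside the other v1.21.2 binaries, as the PATH in the init command later suggests):
	sudo /var/lib/minikube/binaries/v1.21.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run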
	I0708 23:02:22.968914  258367 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0708 23:02:22.971253  258367 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0708 23:02:22.978510  258367 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783 for IP: 192.168.49.2
	I0708 23:02:22.978544  258367 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key
	I0708 23:02:23.190106  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt ...
	I0708 23:02:23.190133  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt: {Name:mk5906ee5301ffc572d7fce2bd29e40064ac492c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.190305  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key ...
	I0708 23:02:23.190322  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key: {Name:mkb3a034c656a399e8a3b1d9af8b8f2247a84d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.190411  258367 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key
	I0708 23:02:23.411680  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt ...
	I0708 23:02:23.411701  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt: {Name:mkfacfbb209518217be8fd06056f51a62e70f58a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.411817  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key ...
	I0708 23:02:23.411832  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key: {Name:mk60c06d0c1c23937aa87c9d7bc9822baf022041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.411935  258367 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.key
	I0708 23:02:23.411946  258367 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt with IP's: []
	I0708 23:02:23.821741  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt ...
	I0708 23:02:23.821759  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: {Name:mk73acb25b69f7dc2f7fe66039431368600627ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.821888  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.key ...
	I0708 23:02:23.821902  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.key: {Name:mk7826adaa22e4c26a84eba8050bc8619fdb79db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:23.821984  258367 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2
	I0708 23:02:23.821992  258367 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0708 23:02:24.106323  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2 ...
	I0708 23:02:24.106346  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2: {Name:mk3bbb246fdb644e03c711331594b91b252c5977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.106500  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2 ...
	I0708 23:02:24.106515  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2: {Name:mk07bce1d3519cbcd08d7913590fefe97615f3d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.106597  258367 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt
	I0708 23:02:24.106652  258367 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key
	I0708 23:02:24.106701  258367 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key
	I0708 23:02:24.106710  258367 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt with IP's: []
	I0708 23:02:24.505496  258367 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt ...
	I0708 23:02:24.505515  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt: {Name:mk1f4245b8de4cb8f5296cbf241b13df7d0321b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.505635  258367 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key ...
	I0708 23:02:24.505649  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key: {Name:mk879c3b7560b49027151dcf6f41f1374ceeca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:02:24.505807  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem (1675 bytes)
	I0708 23:02:24.505843  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem (1078 bytes)
	I0708 23:02:24.505870  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem (1123 bytes)
	I0708 23:02:24.505903  258367 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem (1679 bytes)
	I0708 23:02:24.506929  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0708 23:02:24.521556  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0708 23:02:24.535315  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 23:02:24.549240  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 23:02:24.566238  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 23:02:24.579920  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0708 23:02:24.593499  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 23:02:24.607144  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 23:02:24.621166  258367 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 23:02:24.635126  258367 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 23:02:24.645377  258367 ssh_runner.go:149] Run: openssl version
	I0708 23:02:24.649491  258367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 23:02:24.655325  258367 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:02:24.657850  258367 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jul  8 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:02:24.657899  258367 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:02:24.661967  258367 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
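	The b5213941.0 link name follows OpenSSL's c_rehash convention: the certificate's subject hash (the output of the openssl x509 -hash call above) plus a ".0" suffix, which is how the TLS stack locates the CA under /etc/ssl/certs. Manual verification (sketch):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0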
	I0708 23:02:24.667605  258367 kubeadm.go:390] StartCluster: {Name:addons-20210708230204-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:addons-20210708230204-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:02:24.667676  258367 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 23:02:24.667724  258367 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 23:02:24.689946  258367 cri.go:76] found id: ""
	I0708 23:02:24.689998  258367 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 23:02:24.695552  258367 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0708 23:02:24.700899  258367 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0708 23:02:24.700938  258367 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0708 23:02:24.706313  258367 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0708 23:02:24.706345  258367 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0708 23:02:51.639152  258367 out.go:192]   - Generating certificates and keys ...
	I0708 23:02:51.642275  258367 out.go:192]   - Booting up control plane ...
	I0708 23:02:51.645712  258367 out.go:192]   - Configuring RBAC rules ...
	I0708 23:02:51.648194  258367 cni.go:93] Creating CNI manager for ""
	I0708 23:02:51.648207  258367 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:02:51.650911  258367 out.go:165] * Configuring CNI (Container Networking Interface) ...
	I0708 23:02:51.650974  258367 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0708 23:02:51.654228  258367 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.2/kubectl ...
	I0708 23:02:51.654241  258367 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0708 23:02:51.665340  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0708 23:02:52.178474  258367 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0708 23:02:52.178524  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:52.178573  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5 minikube.k8s.io/name=addons-20210708230204-257783 minikube.k8s.io/updated_at=2021_07_08T23_02_52_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:52.334017  258367 ops.go:34] apiserver oom_adj: -16
	I0708 23:02:52.334150  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:52.926162  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:53.425745  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:53.926612  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:54.426349  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:54.926224  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:55.426682  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:55.926123  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:56.426433  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:56.926151  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:57.425844  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:57.925720  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:58.425779  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:58.926601  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:59.426434  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:02:59.925788  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:00.425782  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:00.926167  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:01.425732  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:01.925980  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:02.426344  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:02.926043  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:03.425924  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:03.926241  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:04.425728  258367 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0708 23:03:04.528289  258367 kubeadm.go:985] duration metric: took 12.349811507s to wait for elevateKubeSystemPrivileges.
	I0708 23:03:04.528311  258367 kubeadm.go:392] StartCluster complete in 39.860709437s
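	The burst of "kubectl get sa default" calls above is minikube polling roughly every 500ms until kubeadm has created the default service account; an equivalent hand-rolled wait (sketch, using the same paths as the log):
	until sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done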
	I0708 23:03:04.528325  258367 settings.go:142] acquiring lock: {Name:mkd7e81a263e91a8570dc867d9c6f95db0e3f272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:03:04.528427  258367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:03:04.528864  258367 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig: {Name:mk7ece99e42242db0c85d6c11531cc9d1c12a34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:03:05.058397  258367 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210708230204-257783" rescaled to 1
	I0708 23:03:05.058451  258367 start.go:220] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0708 23:03:05.061804  258367 out.go:165] * Verifying Kubernetes components...
	I0708 23:03:05.061859  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:03:05.058488  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 23:03:05.058693  258367 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0708 23:03:05.061990  258367 addons.go:59] Setting volumesnapshots=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.062007  258367 addons.go:135] Setting addon volumesnapshots=true in "addons-20210708230204-257783"
	I0708 23:03:05.062033  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.062522  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.062625  258367 addons.go:59] Setting ingress=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.062640  258367 addons.go:135] Setting addon ingress=true in "addons-20210708230204-257783"
	I0708 23:03:05.062668  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.063063  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.063653  258367 addons.go:59] Setting metrics-server=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.063680  258367 addons.go:135] Setting addon metrics-server=true in "addons-20210708230204-257783"
	I0708 23:03:05.063738  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.064183  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.064242  258367 addons.go:59] Setting olm=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.064258  258367 addons.go:135] Setting addon olm=true in "addons-20210708230204-257783"
	I0708 23:03:05.064273  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.064666  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.064716  258367 addons.go:59] Setting registry=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.064729  258367 addons.go:135] Setting addon registry=true in "addons-20210708230204-257783"
	I0708 23:03:05.064744  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.065116  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.065165  258367 addons.go:59] Setting storage-provisioner=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.065177  258367 addons.go:135] Setting addon storage-provisioner=true in "addons-20210708230204-257783"
	W0708 23:03:05.065182  258367 addons.go:147] addon storage-provisioner should already be in state true
	I0708 23:03:05.065199  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.065568  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.065625  258367 addons.go:59] Setting default-storageclass=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.065639  258367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210708230204-257783"
	I0708 23:03:05.065837  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.065881  258367 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.065904  258367 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210708230204-257783"
	I0708 23:03:05.065927  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.066288  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.066342  258367 addons.go:59] Setting gcp-auth=true in profile "addons-20210708230204-257783"
	I0708 23:03:05.098501  258367 mustload.go:65] Loading cluster: addons-20210708230204-257783
	I0708 23:03:05.098919  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.371366  258367 out.go:165]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0708 23:03:05.378321  258367 out.go:165]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0708 23:03:05.380422  258367 out.go:165]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0708 23:03:05.380489  258367 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0708 23:03:05.380503  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0708 23:03:05.380556  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.377529  258367 addons.go:135] Setting addon default-storageclass=true in "addons-20210708230204-257783"
	W0708 23:03:05.381616  258367 addons.go:147] addon default-storageclass should already be in state true
	I0708 23:03:05.381653  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.382135  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:05.400974  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0708 23:03:05.401045  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0708 23:03:05.401062  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0708 23:03:05.401113  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.419618  258367 out.go:165]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0708 23:03:05.419690  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0708 23:03:05.419788  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0708 23:03:05.419842  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.461104  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:05.461570  258367 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0708 23:03:05.461617  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.465100  258367 out.go:165]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0708 23:03:05.468585  258367 out.go:165]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0708 23:03:05.487461  258367 node_ready.go:35] waiting up to 6m0s for node "addons-20210708230204-257783" to be "Ready" ...
	I0708 23:03:05.492140  258367 out.go:165]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 23:03:05.492222  258367 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:03:05.492230  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 23:03:05.492276  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.504847  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0708 23:03:05.515760  258367 out.go:165]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0708 23:03:05.519387  258367 out.go:165]   - Using image registry:2.7.1
	I0708 23:03:05.519516  258367 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0708 23:03:05.519538  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0708 23:03:05.519599  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.661979  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0708 23:03:05.661925  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.661958  258367 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0708 23:03:05.667742  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0708 23:03:05.667802  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.670800  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0708 23:03:05.676575  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0708 23:03:05.683758  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0708 23:03:05.688314  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0708 23:03:05.694203  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0708 23:03:05.701383  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0708 23:03:05.708144  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0708 23:03:05.714269  258367 out.go:165]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0708 23:03:05.714329  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0708 23:03:05.714337  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0708 23:03:05.714395  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.847067  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.879578  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.938388  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.943788  258367 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 23:03:05.943837  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 23:03:05.943906  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:05.963212  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.963638  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:05.968031  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.033345  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.085378  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.088288  258367 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0708 23:03:06.088302  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0708 23:03:06.112546  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:03:06.168852  258367 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0708 23:03:06.184062  258367 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0708 23:03:06.184080  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0708 23:03:06.201809  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0708 23:03:06.201825  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0708 23:03:06.221468  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0708 23:03:06.221486  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0708 23:03:06.246330  258367 addons.go:135] Setting addon gcp-auth=true in "addons-20210708230204-257783"
	I0708 23:03:06.246370  258367 host.go:66] Checking if "addons-20210708230204-257783" exists ...
	I0708 23:03:06.246846  258367 cli_runner.go:115] Run: docker container inspect addons-20210708230204-257783 --format={{.State.Status}}
	I0708 23:03:06.288342  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0708 23:03:06.290388  258367 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0708 23:03:06.290404  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0708 23:03:06.297690  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0708 23:03:06.297703  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0708 23:03:06.300511  258367 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0708 23:03:06.300523  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0708 23:03:06.309464  258367 out.go:165]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0708 23:03:06.312084  258367 out.go:165]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0708 23:03:06.312130  258367 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0708 23:03:06.312141  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0708 23:03:06.312185  258367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210708230204-257783
	I0708 23:03:06.311243  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0708 23:03:06.312350  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0708 23:03:06.315934  258367 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0708 23:03:06.315947  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0708 23:03:06.333850  258367 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0708 23:03:06.333866  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0708 23:03:06.340033  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0708 23:03:06.347208  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 23:03:06.368086  258367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49502 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/addons-20210708230204-257783/id_rsa Username:docker}
	I0708 23:03:06.373362  258367 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 23:03:06.373381  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0708 23:03:06.407298  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0708 23:03:06.407832  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0708 23:03:06.407846  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0708 23:03:06.433603  258367 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0708 23:03:06.433620  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0708 23:03:06.459997  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0708 23:03:06.521614  258367 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0708 23:03:06.521634  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0708 23:03:06.526576  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0708 23:03:06.526592  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0708 23:03:06.630213  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0708 23:03:06.630233  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0708 23:03:06.665127  258367 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0708 23:03:06.665146  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0708 23:03:06.730299  258367 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0708 23:03:06.730316  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0708 23:03:06.757125  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0708 23:03:06.757143  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0708 23:03:06.802645  258367 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 23:03:06.802661  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0708 23:03:06.832185  258367 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0708 23:03:06.832201  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0708 23:03:06.877465  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0708 23:03:06.877483  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0708 23:03:06.959711  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0708 23:03:06.962339  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 23:03:07.007218  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0708 23:03:07.007279  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0708 23:03:07.096016  258367 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.591115427s)
	I0708 23:03:07.096079  258367 start.go:730] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
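
The completed pipeline above is how minikube publishes host.minikube.internal: it reads the coredns ConfigMap, splices a hosts plugin stanza in front of the existing forward directive with sed, and replaces the ConfigMap. Unescaping the sed script, the stanza that lands in the Corefile is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
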
	I0708 23:03:07.129552  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0708 23:03:07.129605  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0708 23:03:07.221658  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0708 23:03:07.221716  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0708 23:03:07.325389  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0708 23:03:07.325435  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0708 23:03:07.426703  258367 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0708 23:03:07.426761  258367 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0708 23:03:07.442363  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.329793015s)
	I0708 23:03:07.524298  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
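
The many "scp memory --> <path> (<n> bytes)" lines above are not file copies: each addon manifest is an in-memory asset streamed to the node over the SSH connection opened by sshutil.go. A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh; pushFromMemory is illustrative only and not minikube's actual ssh_runner.go:

	package sshutil
	
	import (
		"bytes"
		"fmt"
	
		"golang.org/x/crypto/ssh"
	)
	
	// pushFromMemory streams an in-memory manifest to the node over an
	// existing SSH connection and writes it with sudo tee, mirroring the
	// "scp memory --> /etc/kubernetes/addons/..." lines in the log.
	// Illustrative only; minikube's ssh_runner.go differs in detail.
	func pushFromMemory(client *ssh.Client, data []byte, dest string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data) // no local temp file needed
		return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dest))
	}
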
	I0708 23:03:07.757844  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:08.324926  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.036545542s)
	I0708 23:03:08.324953  258367 addons.go:313] Verifying addon registry=true in "addons-20210708230204-257783"
	I0708 23:03:08.327648  258367 out.go:165] * Verifying registry addon...
	I0708 23:03:08.329284  258367 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0708 23:03:08.430886  258367 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0708 23:03:08.430908  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:08.965833  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:09.475295  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:09.989922  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:10.040204  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:10.520729  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:10.760315  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.413074224s)
	I0708 23:03:10.760389  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (4.42033799s)
	I0708 23:03:10.760409  258367 addons.go:313] Verifying addon ingress=true in "addons-20210708230204-257783"
	I0708 23:03:10.763179  258367 out.go:165] * Verifying ingress addon...
	I0708 23:03:10.764874  258367 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0708 23:03:10.813355  258367 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0708 23:03:10.813398  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:10.954984  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:11.316902  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:11.437680  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:11.816977  258367 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0708 23:03:11.816992  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:11.933802  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:12.317784  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:12.440866  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:12.562931  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:12.826050  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:12.960758  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:12.989544  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (6.582214242s)
	W0708 23:03:12.989579  258367 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0708 23:03:12.989597  258367 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
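
The failure above is a CRD establishment race, not a broken manifest: crds.yaml and olm.yaml go through a single kubectl apply, so the OperatorGroup, ClusterServiceVersion, and CatalogSource objects are rejected while the API server is still establishing the CRDs created a moment earlier. retry.go schedules a re-apply after a short backoff (276ms here), by which time the kinds resolve; the retried apply completes at 23:03:14.871 below. A minimal sketch of that apply-with-backoff shape; applyWithRetry and its constants are illustrative, not minikube's actual retry code:

	package addons
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// applyWithRetry re-runs kubectl apply with a growing backoff, the
	// same shape as the retry the log schedules above. Illustrative
	// only; not minikube's actual retry.go.
	func applyWithRetry(manifests []string, attempts int) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		backoff := 250 * time.Millisecond
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil // CRDs now established; the CRs apply cleanly
			}
			lastErr = fmt.Errorf("kubectl apply: %v\n%s", err, out)
			time.Sleep(backoff)
			backoff *= 2
		}
		return lastErr
	}
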
	I0708 23:03:12.989682  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.529663613s)
	I0708 23:03:12.989696  258367 addons.go:313] Verifying addon metrics-server=true in "addons-20210708230204-257783"
	I0708 23:03:12.989760  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (6.029992828s)
	I0708 23:03:12.989771  258367 addons.go:313] Verifying addon gcp-auth=true in "addons-20210708230204-257783"
	I0708 23:03:12.992443  258367 out.go:165] * Verifying gcp-auth addon...
	I0708 23:03:12.994031  258367 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0708 23:03:12.990200  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.027809025s)
	W0708 23:03:12.994192  258367 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0708 23:03:12.994206  258367 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
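
The VolumeSnapshotClass rejection above is the same establishment race in a second batch: the snapshot.storage.k8s.io CRDs are created by this very apply, so csi-hostpath-snapshotclass.yaml cannot resolve until they are established. The 360ms retry scheduled here re-runs the batch at 23:03:13.355 and completes cleanly at 23:03:14.871 (see the sketch after the OLM retry above).
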
	I0708 23:03:13.031412  258367 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0708 23:03:13.031426  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:13.266238  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0708 23:03:13.280326  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.755950239s)
	I0708 23:03:13.280349  258367 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210708230204-257783"
	I0708 23:03:13.282866  258367 out.go:165] * Verifying csi-hostpath-driver addon...
	I0708 23:03:13.284710  258367 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0708 23:03:13.300997  258367 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0708 23:03:13.301018  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:13.323810  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:13.355033  258367 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0708 23:03:13.456204  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:13.709988  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:13.828512  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:13.829141  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:13.935230  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:14.055589  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:14.305521  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:14.316503  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:14.435410  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:14.537594  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:14.807762  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:14.828786  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:14.871664  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.516602534s)
	I0708 23:03:14.871693  258367 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (1.605427862s)
	I0708 23:03:14.934512  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:15.028279  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:15.033924  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:15.306879  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:15.316769  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:15.435470  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:15.541639  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:15.807294  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:15.816837  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:15.933956  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:16.034246  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:16.309311  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:16.321752  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:16.434021  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:16.533676  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:16.805508  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:16.815731  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:16.933818  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:17.033326  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:17.305073  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:17.316483  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:17.434168  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:17.528224  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:17.533933  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:17.804563  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:17.815690  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:17.934352  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:18.033474  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:18.306836  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:18.315822  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:18.433636  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:18.534190  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:18.878422  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:18.879963  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:18.934171  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:19.033713  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:19.306039  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:19.316562  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:19.433898  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:19.533217  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:19.804900  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:19.815673  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:19.934357  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:20.028418  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:20.033110  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:20.307396  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:20.315940  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:20.434357  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:20.533275  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:20.805517  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:20.815912  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:20.934171  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:21.033363  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:21.305599  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:21.315940  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:21.434182  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:21.533779  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:21.806276  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:21.816677  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:21.934247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:22.028451  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:22.033427  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:22.306106  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:22.316600  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:22.434222  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:22.534180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:22.805282  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:22.816023  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:22.934458  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:23.034254  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:23.305403  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:23.315955  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:23.434280  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:23.533816  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:23.805762  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:23.816193  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:23.933917  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:24.033525  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:24.305586  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:24.316060  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:24.434308  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:24.528203  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:24.533860  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:24.804747  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:24.816238  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:24.934405  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:25.033542  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:25.305381  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:25.315630  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:25.434115  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:25.533435  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:25.805258  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:25.815583  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:25.934247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:26.033468  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:26.313468  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:26.316207  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:26.434177  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:26.533378  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:26.805481  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:26.815971  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:26.933457  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:27.028244  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:27.033944  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:27.305978  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:27.316514  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:27.433572  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:27.533438  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:27.808338  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:27.815850  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:27.934256  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:28.033462  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:28.305603  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:28.316131  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:28.434164  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:28.533546  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:28.805406  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:28.815807  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:29.027114  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:29.028529  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:29.033307  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:29.305177  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:29.316667  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:29.434163  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:29.533817  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:29.804531  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:29.815964  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:29.934253  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:30.034141  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:30.304975  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:30.316394  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:30.434247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:30.533988  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:30.804599  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:30.815861  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:30.934212  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:31.033880  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:31.305080  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:31.316591  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:31.438379  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:31.528481  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:31.534085  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:31.805329  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:31.815830  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:31.934243  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:32.033930  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:32.305955  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:32.316622  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:32.433806  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:32.534171  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:32.805098  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:32.816697  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:32.934425  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:33.033830  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:33.305586  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:33.316129  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:33.434296  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:33.533867  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:33.805664  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:33.816014  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:33.934202  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:34.027838  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:34.033689  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:34.312483  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:34.325172  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:34.433975  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:34.533845  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:34.805613  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:34.816177  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:34.934061  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:35.033908  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:35.304911  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:35.316503  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:35.433830  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:35.533555  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:35.805708  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:35.816011  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:35.934894  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:36.033706  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:36.305768  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:36.316210  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:36.433880  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:36.527982  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:36.533718  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:36.806281  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:36.816802  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:36.934001  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:37.033899  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:37.305715  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:37.316438  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:37.434076  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:37.534167  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:37.805363  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:37.815663  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:37.934569  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:38.033576  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:38.305386  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:38.315919  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:38.434305  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:38.528076  258367 node_ready.go:58] node "addons-20210708230204-257783" has status "Ready":"False"
	I0708 23:03:38.533725  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:38.805854  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:38.816199  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:38.933965  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:39.161933  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:39.305111  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:39.316710  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:39.433757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:39.533280  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:39.805180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:39.815640  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:39.933837  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:40.033359  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:40.305820  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:40.316500  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:40.434764  258367 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0708 23:03:40.434780  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:40.529064  258367 node_ready.go:49] node "addons-20210708230204-257783" has status "Ready":"True"
	I0708 23:03:40.529082  258367 node_ready.go:38] duration metric: took 35.041601595s waiting for node "addons-20210708230204-257783" to be "Ready" ...
	I0708 23:03:40.529090  258367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 23:03:40.536245  258367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace to be "Ready" ...
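
Both wait loops interleaved through the rest of this log (kapi.go:96 for labelled addon pods, pod_ready.go for the system-critical pods listed above) poll the API server until every matching pod is Running and Ready. A minimal client-go sketch of that check; waitForLabeledPodsReady is illustrative, not minikube's actual kapi.go:

	package kapi
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitForLabeledPodsReady polls until at least one pod matches the
	// label selector and every match is Running with condition
	// Ready=True — the same check the "waiting for pod ..." lines above
	// repeat. Illustrative only; not minikube's kapi.go.
	func waitForLabeledPodsReady(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling through transient errors
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
						break
					}
				}
				if !ready {
					return false, nil
				}
			}
			return true, nil
		})
	}
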
	I0708 23:03:40.538782  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:40.805427  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:40.815950  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:40.935684  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:41.033391  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:41.305384  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:41.315751  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:41.434230  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:41.534294  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:41.805062  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:41.816413  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:41.934437  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:42.033426  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:42.305936  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:42.316832  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:42.452020  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:42.537291  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:42.556855  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
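
This status also explains the long Pending runs above: the PodScheduled=False condition, stamped 23:03:04, records the node.kubernetes.io/not-ready taint from before the node went Ready at 23:03:40.529, and the scheduler has not yet re-queued coredns, so the stale Unschedulable condition is still reported a few seconds later (again at 23:03:45.055 below).
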
	I0708 23:03:42.808971  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:42.837617  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:42.948502  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:43.042597  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:43.306461  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:43.316194  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:43.435340  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:43.545175  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:43.808213  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:43.816653  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:43.939445  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:44.036294  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:44.310153  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:44.316861  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:44.452869  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:44.534309  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:44.858439  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:44.859060  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:44.980373  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:45.033572  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:45.055378  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:45.307109  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:45.317405  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:45.436040  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:45.535852  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:45.813865  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:45.822168  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:45.938855  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:46.034757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:46.307667  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:46.318795  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:46.434514  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:46.534105  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:46.808438  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:46.816223  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:46.934772  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:47.047317  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:47.055451  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:47.309428  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:47.326146  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:47.435940  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:47.533825  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:47.818764  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:47.819392  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:47.939958  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:48.035140  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:48.310806  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:48.322692  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:48.435268  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:48.534774  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:48.805303  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:48.816955  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:48.934285  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:49.034137  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:49.305083  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:49.317534  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:49.434132  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:49.534176  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:49.553147  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:49.806795  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:49.816930  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:49.935353  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:50.044577  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:50.308970  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:50.317116  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:50.453642  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:50.537013  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:50.809687  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:50.816605  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:50.934956  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:51.034111  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:51.306357  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:51.316788  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:51.434223  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:51.533862  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:51.555405  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:51.806049  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:51.816665  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:51.935239  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:52.036764  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:52.306561  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:52.317258  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:52.435120  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:52.534680  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:52.812738  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:52.823062  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:52.935048  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:53.088180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:53.313593  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:53.316210  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:53.435174  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:53.534856  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:53.568302  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:53.814037  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:53.828699  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:53.935094  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:54.034051  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:54.305943  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:54.316383  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:54.434884  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:54.533560  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:54.805349  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:54.816025  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:54.935440  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:55.035068  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:55.305204  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:55.316789  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:55.434307  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:55.534194  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:55.805524  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:55.816474  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:55.938616  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:56.035071  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:56.054514  258367 pod_ready.go:102] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-08 23:03:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0708 23:03:56.306243  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:56.317135  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:56.437740  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:56.534893  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:56.808396  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:56.818595  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:56.935008  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:57.033737  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:57.305195  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:57.316759  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:57.434680  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:57.534533  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:57.805918  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:57.816886  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:57.934412  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:58.034045  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:58.053423  258367 pod_ready.go:92] pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.053448  258367 pod_ready.go:81] duration metric: took 17.517183186s waiting for pod "coredns-558bd4d5db-zhg8q" in "kube-system" namespace to be "Ready" ...
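The log switches from pod_ready.go:102 ("doesn't have Ready status") to pod_ready.go:92 ("has status Ready:True") once the pod's Ready condition turns True, about 8 seconds after the not-ready taint was lifted at 23:03:50. The check being performed can be sketched with client-go types (a sketch under that assumption; isPodReady is illustrative, not minikube's actual function):

// isPodReady reports whether a pod's Ready condition is True, which is
// what the pod_ready.go:92/102 transitions above are evidently testing.
// Illustrative sketch; minikube's implementation may differ in detail.
package main

import corev1 "k8s.io/api/core/v1"

func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}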
	I0708 23:03:58.053472  258367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.056899  258367 pod_ready.go:92] pod "etcd-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.056912  258367 pod_ready.go:81] duration metric: took 3.428532ms waiting for pod "etcd-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.056924  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.060405  258367 pod_ready.go:92] pod "kube-apiserver-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.060421  258367 pod_ready.go:81] duration metric: took 3.48906ms waiting for pod "kube-apiserver-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.060430  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.063897  258367 pod_ready.go:92] pod "kube-controller-manager-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.063911  258367 pod_ready.go:81] duration metric: took 3.473676ms waiting for pod "kube-controller-manager-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.063920  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6dvf4" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.067194  258367 pod_ready.go:92] pod "kube-proxy-6dvf4" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.067211  258367 pod_ready.go:81] duration metric: took 3.28452ms waiting for pod "kube-proxy-6dvf4" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.067219  258367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.305241  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:58.316828  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:58.433981  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:58.452430  258367 pod_ready.go:92] pod "kube-scheduler-addons-20210708230204-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:03:58.452441  258367 pod_ready.go:81] duration metric: took 385.215878ms waiting for pod "kube-scheduler-addons-20210708230204-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.452450  258367 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace to be "Ready" ...
	I0708 23:03:58.534269  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:58.805326  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:58.817091  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:58.934377  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:59.034175  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:59.350171  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:59.352070  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:59.434599  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:03:59.534222  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:03:59.805415  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:03:59.815966  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:03:59.934956  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:00.034180  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:00.309488  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:00.318040  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:00.446679  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:00.535081  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:00.816881  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:00.833611  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:00.870726  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:00.940553  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:01.034681  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:01.307110  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:01.317003  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:01.434206  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:01.534362  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:01.807693  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:01.816875  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:01.934543  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:02.034256  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:02.314645  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:02.317830  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:02.434638  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:02.534555  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:02.805757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:02.816388  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:02.934273  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:03.034346  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:03.306135  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:03.316729  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:03.356699  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:03.434801  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:03.533628  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:03.806090  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:03.817066  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:03.935092  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:04.033904  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:04.308003  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:04.321774  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:04.437942  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:04.537834  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:04.811895  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:04.820598  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:04.941910  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:05.033681  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:05.306843  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:05.316309  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:05.434665  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:05.534399  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:05.808209  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:05.828690  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:05.867140  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:05.948427  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:06.034144  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:06.310817  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:06.317048  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:06.438453  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:06.538009  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:06.814561  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:06.818503  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:06.934667  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:07.034023  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:07.324320  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:07.329971  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:07.435453  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:07.534858  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:07.806398  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:07.816291  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:07.936272  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:08.045241  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:08.321615  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:08.322966  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:08.361134  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:08.444857  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:08.539896  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:08.810573  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:08.824748  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:08.950445  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:09.038119  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:09.309840  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:09.319094  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:09.451790  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:09.534821  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:09.806319  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:09.824601  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:09.948537  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:10.038714  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:10.306101  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:10.316712  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:10.444324  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:10.534107  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:10.816348  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:10.821169  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:10.880798  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:10.938689  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:11.035004  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:11.309429  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:11.316532  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:11.437001  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:11.534517  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:11.806765  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:11.817111  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:11.936678  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:12.035677  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:12.315185  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:12.319448  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:12.435665  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:12.533892  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:12.806413  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:12.816140  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:12.935226  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:13.033973  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:13.308652  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:13.318458  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:13.359912  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:13.434988  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:13.535000  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:13.807981  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:13.817954  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:13.939485  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:14.035287  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:14.310831  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:14.317343  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:14.434300  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:14.533986  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:14.806121  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:14.817043  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:14.934166  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0708 23:04:15.034426  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:15.323217  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:15.333228  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:15.361384  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:15.435068  258367 kapi.go:108] duration metric: took 1m7.105782912s to wait for kubernetes.io/minikube-addons=registry ...
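The kapi.go:96 lines are a poll over a label selector: roughly every half second the matching pods are listed, and once they are all Running, kapi.go:108 records the total wait (here 1m7s for kubernetes.io/minikube-addons=registry). A hedged approximation of such a loop using client-go's wait package (waitForLabeledPods and the 500ms interval are assumptions inferred from the timestamps, not minikube's exact code):

// waitForLabeledPods polls until every pod matching the selector is Running.
// Approximation of minikube's kapi.go wait loop; details are assumed.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForLabeledPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // not found yet (or transient API error): keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
}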
	I0708 23:04:15.537812  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:15.813281  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:15.819581  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:16.049106  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:16.307540  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:16.316183  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:16.534458  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:16.806532  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:16.818477  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:17.033945  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:17.305444  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:17.315952  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:17.391245  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:17.536664  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:17.807013  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:17.817134  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:18.033576  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:18.307317  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:18.316474  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:18.534860  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:18.806288  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:18.821696  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:19.034372  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:19.312150  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:19.320785  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:19.539562  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:19.809151  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:19.819730  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:19.859154  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:20.034514  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:20.308034  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:20.318688  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:20.537429  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:20.828970  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:20.835630  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:21.037624  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:21.308202  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:21.317118  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:21.536565  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:21.835761  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:21.836860  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:21.860825  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:22.053067  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:22.322252  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:22.328107  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:22.537575  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:22.807319  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:22.817028  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:23.034601  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:23.306551  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:23.317288  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:23.534889  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:23.806323  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:23.817210  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:24.034124  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:24.305483  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:24.316211  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:24.355999  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:24.535624  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:24.807872  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:24.818209  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:25.037306  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:25.311421  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:25.322366  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:25.533957  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:25.805852  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:25.816446  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:26.034421  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:26.306096  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:26.316792  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:26.356696  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:26.534030  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:26.806374  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:26.817396  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:27.034314  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:27.341253  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:27.348955  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:27.544750  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:27.816243  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:27.830215  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:28.037855  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:28.314775  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:28.332322  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:28.374180  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:28.545092  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:28.806885  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:28.817006  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:29.049276  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:29.307196  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:29.317899  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:29.714913  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:29.813421  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:29.822469  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:30.034706  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:30.307815  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:30.317761  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:30.376585  258367 pod_ready.go:102] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"False"
	I0708 23:04:30.536075  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:30.819025  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:30.820506  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:30.861001  258367 pod_ready.go:92] pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace has status "Ready":"True"
	I0708 23:04:30.861017  258367 pod_ready.go:81] duration metric: took 32.408556096s waiting for pod "metrics-server-77c99ccb96-g7fdg" in "kube-system" namespace to be "Ready" ...
	I0708 23:04:30.861035  258367 pod_ready.go:38] duration metric: took 50.331926706s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 23:04:30.861054  258367 api_server.go:50] waiting for apiserver process to appear ...
	I0708 23:04:30.861071  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 23:04:30.861149  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
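The ssh_runner.go:149 lines show how each control-plane container is located: minikube runs "sudo crictl ps -a --quiet --name=<name>" inside the node and parses the returned IDs (the "found id:" lines that follow). The invocation below is copied verbatim from the log; wrapping it in os/exec is an assumption for illustration, since minikube actually executes it over SSH inside the node rather than locally:

// listContainerIDs mirrors the "sudo crictl ps -a --quiet --name=..." call
// from the log and returns the matching container IDs, one per output line.
// Sketch only: runs directly on the node, not over SSH as ssh_runner.go does.
package main

import (
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}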
	I0708 23:04:31.039139  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:31.120445  258367 cri.go:76] found id: "31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:31.120490  258367 cri.go:76] found id: ""
	I0708 23:04:31.120510  258367 logs.go:270] 1 containers: [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968]
	I0708 23:04:31.120582  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.126428  258367 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 23:04:31.126503  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 23:04:31.157908  258367 cri.go:76] found id: "22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:31.157946  258367 cri.go:76] found id: ""
	I0708 23:04:31.157962  258367 logs.go:270] 1 containers: [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba]
	I0708 23:04:31.158024  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.160719  258367 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 23:04:31.160800  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 23:04:31.195992  258367 cri.go:76] found id: "b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:31.196010  258367 cri.go:76] found id: ""
	I0708 23:04:31.196015  258367 logs.go:270] 1 containers: [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891]
	I0708 23:04:31.196063  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.198761  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 23:04:31.198825  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 23:04:31.239007  258367 cri.go:76] found id: "44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:31.239025  258367 cri.go:76] found id: ""
	I0708 23:04:31.239030  258367 logs.go:270] 1 containers: [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab]
	I0708 23:04:31.239073  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.241734  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 23:04:31.241798  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 23:04:31.272832  258367 cri.go:76] found id: "49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:31.272850  258367 cri.go:76] found id: ""
	I0708 23:04:31.272856  258367 logs.go:270] 1 containers: [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27]
	I0708 23:04:31.272900  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.275666  258367 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 23:04:31.275734  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 23:04:31.301615  258367 cri.go:76] found id: ""
	I0708 23:04:31.301628  258367 logs.go:270] 0 containers: []
	W0708 23:04:31.301634  258367 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0708 23:04:31.301641  258367 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 23:04:31.301678  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 23:04:31.311401  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:31.322172  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:31.346810  258367 cri.go:76] found id: "bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:31.346834  258367 cri.go:76] found id: ""
	I0708 23:04:31.346840  258367 logs.go:270] 1 containers: [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc]
	I0708 23:04:31.346879  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.349712  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 23:04:31.349757  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 23:04:31.373832  258367 cri.go:76] found id: "8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:31.373850  258367 cri.go:76] found id: ""
	I0708 23:04:31.373856  258367 logs.go:270] 1 containers: [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194]
	I0708 23:04:31.373899  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:31.376711  258367 logs.go:123] Gathering logs for storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] ...
	I0708 23:04:31.376736  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:31.414919  258367 logs.go:123] Gathering logs for kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] ...
	I0708 23:04:31.414940  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:31.472616  258367 logs.go:123] Gathering logs for CRI-O ...
	I0708 23:04:31.472645  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 23:04:31.534862  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:31.614324  258367 logs.go:123] Gathering logs for kubelet ...
	I0708 23:04:31.614347  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 23:04:31.696068  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:31.697579  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:31.733579  258367 logs.go:123] Gathering logs for kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] ...
	I0708 23:04:31.733602  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:31.808651  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:31.826645  258367 logs.go:123] Gathering logs for coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] ...
	I0708 23:04:31.826667  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:31.830534  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:31.860747  258367 logs.go:123] Gathering logs for kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] ...
	I0708 23:04:31.860772  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:31.896690  258367 logs.go:123] Gathering logs for kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] ...
	I0708 23:04:31.896730  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:31.927741  258367 logs.go:123] Gathering logs for container status ...
	I0708 23:04:31.927787  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 23:04:31.997716  258367 logs.go:123] Gathering logs for dmesg ...
	I0708 23:04:31.997741  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 23:04:32.056732  258367 logs.go:123] Gathering logs for describe nodes ...
	I0708 23:04:32.056755  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 23:04:32.079686  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:32.360060  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:32.372384  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:32.406945  258367 logs.go:123] Gathering logs for etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] ...
	I0708 23:04:32.406966  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:32.447186  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:32.447204  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	W0708 23:04:32.447320  258367 out.go:230] X Problems detected in kubelet:
	W0708 23:04:32.447329  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:32.447336  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:32.447342  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:32.447346  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:04:32.535609  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:32.871932  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:32.875314  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:33.035264  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:33.322729  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:33.325758  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:33.539049  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:33.809122  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:33.840811  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:34.038231  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:34.305821  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:34.316472  258367 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0708 23:04:34.536871  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:34.806324  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:34.817609  258367 kapi.go:108] duration metric: took 1m24.052734457s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0708 23:04:35.034113  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:35.305618  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:35.534039  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:35.805464  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:36.033757  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:36.305563  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:36.535022  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:36.810065  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:37.044021  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:37.306365  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:37.534507  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0708 23:04:37.806247  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:38.034711  258367 kapi.go:108] duration metric: took 1m25.040677396s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0708 23:04:38.037692  258367 out.go:165] * Your GCP credentials will now be mounted into every pod created in the addons-20210708230204-257783 cluster.
	I0708 23:04:38.039905  258367 out.go:165] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0708 23:04:38.043013  258367 out.go:165] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
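The gcp-auth messages above describe an opt-out mechanism: the webhook skips any pod labeled with the `gcp-auth-skip-secret` key. A hedged sketch of a pod spec carrying that label, written with client-go types; only the label key comes from the message, while the "true" value and all names here are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipSecretPod builds a pod that opts out of gcp-auth credential mounting
// via the label key mentioned in the log message above.
func skipSecretPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-creds", // illustrative name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
		},
	}
}

func main() {
	fmt.Println(skipSecretPod().Labels)
}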
	I0708 23:04:38.305726  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:38.805442  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:39.313290  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:39.806014  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:40.305951  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:40.805935  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:41.306167  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:41.807569  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:42.306223  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:42.448401  258367 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:04:42.471087  258367 api_server.go:70] duration metric: took 1m37.412609728s to wait for apiserver process to appear ...
	I0708 23:04:42.471136  258367 api_server.go:86] waiting for apiserver healthz status ...
	I0708 23:04:42.471163  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 23:04:42.471209  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 23:04:42.498226  258367 cri.go:76] found id: "31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:42.498240  258367 cri.go:76] found id: ""
	I0708 23:04:42.498245  258367 logs.go:270] 1 containers: [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968]
	I0708 23:04:42.498287  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.500629  258367 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 23:04:42.500670  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 23:04:42.522032  258367 cri.go:76] found id: "22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:42.522070  258367 cri.go:76] found id: ""
	I0708 23:04:42.522089  258367 logs.go:270] 1 containers: [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba]
	I0708 23:04:42.522123  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.524515  258367 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 23:04:42.524555  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 23:04:42.544404  258367 cri.go:76] found id: "b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:42.544416  258367 cri.go:76] found id: ""
	I0708 23:04:42.544421  258367 logs.go:270] 1 containers: [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891]
	I0708 23:04:42.544452  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.546783  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 23:04:42.546847  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 23:04:42.566390  258367 cri.go:76] found id: "44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:42.566406  258367 cri.go:76] found id: ""
	I0708 23:04:42.566410  258367 logs.go:270] 1 containers: [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab]
	I0708 23:04:42.566444  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.568814  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 23:04:42.568853  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 23:04:42.589259  258367 cri.go:76] found id: "49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:42.589295  258367 cri.go:76] found id: ""
	I0708 23:04:42.589306  258367 logs.go:270] 1 containers: [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27]
	I0708 23:04:42.589338  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.591563  258367 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 23:04:42.591603  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 23:04:42.614367  258367 cri.go:76] found id: ""
	I0708 23:04:42.614381  258367 logs.go:270] 0 containers: []
	W0708 23:04:42.614386  258367 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0708 23:04:42.614393  258367 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 23:04:42.614447  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 23:04:42.635565  258367 cri.go:76] found id: "bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:42.635602  258367 cri.go:76] found id: ""
	I0708 23:04:42.635617  258367 logs.go:270] 1 containers: [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc]
	I0708 23:04:42.635661  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.638113  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 23:04:42.638155  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 23:04:42.658400  258367 cri.go:76] found id: "8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:42.658416  258367 cri.go:76] found id: ""
	I0708 23:04:42.658420  258367 logs.go:270] 1 containers: [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194]
	I0708 23:04:42.658462  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:42.660879  258367 logs.go:123] Gathering logs for describe nodes ...
	I0708 23:04:42.660896  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 23:04:42.804504  258367 logs.go:123] Gathering logs for kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] ...
	I0708 23:04:42.804554  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:42.813337  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:42.865302  258367 logs.go:123] Gathering logs for etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] ...
	I0708 23:04:42.865326  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:42.890504  258367 logs.go:123] Gathering logs for coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] ...
	I0708 23:04:42.890524  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:42.910953  258367 logs.go:123] Gathering logs for storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] ...
	I0708 23:04:42.910972  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:42.931942  258367 logs.go:123] Gathering logs for container status ...
	I0708 23:04:42.931963  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 23:04:42.960735  258367 logs.go:123] Gathering logs for dmesg ...
	I0708 23:04:42.960775  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 23:04:43.006287  258367 logs.go:123] Gathering logs for kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] ...
	I0708 23:04:43.006333  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:43.045345  258367 logs.go:123] Gathering logs for kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] ...
	I0708 23:04:43.045367  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:43.069858  258367 logs.go:123] Gathering logs for kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] ...
	I0708 23:04:43.069878  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:43.111980  258367 logs.go:123] Gathering logs for CRI-O ...
	I0708 23:04:43.112002  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 23:04:43.206087  258367 logs.go:123] Gathering logs for kubelet ...
	I0708 23:04:43.206109  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 23:04:43.295091  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:43.296592  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:43.309878  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:43.337457  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:43.337472  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	W0708 23:04:43.337580  258367 out.go:230] X Problems detected in kubelet:
	W0708 23:04:43.337591  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:43.337599  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:43.337608  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:43.337613  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:04:43.806280  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:44.307515  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:44.805825  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:45.305526  258367 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0708 23:04:45.805382  258367 kapi.go:108] duration metric: took 1m32.520669956s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0708 23:04:45.807522  258367 out.go:165] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, volumesnapshots, olm, registry, ingress, gcp-auth, csi-hostpath-driver
	I0708 23:04:45.807541  258367 addons.go:344] enableAddons completed in 1m40.748851438s
	I0708 23:04:53.338820  258367 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0708 23:04:53.347181  258367 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0708 23:04:53.348070  258367 api_server.go:139] control plane version: v1.21.2
	I0708 23:04:53.348089  258367 api_server.go:129] duration metric: took 10.876941605s to wait for apiserver health ...
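The healthz wait above boils down to polling the apiserver endpoint until it answers 200 with "ok". A minimal sketch of that probe; TLS verification is skipped here for brevity, whereas the real check authenticates using the cluster CA from the kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz polls an apiserver /healthz endpoint until it returns 200 OK,
// mirroring the api_server.go wait in the log above.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second) // absorb connection-refused errors during startup
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}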
	I0708 23:04:53.348098  258367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 23:04:53.348115  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0708 23:04:53.348166  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0708 23:04:53.375754  258367 cri.go:76] found id: "31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:53.375767  258367 cri.go:76] found id: ""
	I0708 23:04:53.375772  258367 logs.go:270] 1 containers: [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968]
	I0708 23:04:53.375811  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.378392  258367 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0708 23:04:53.378435  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0708 23:04:53.398815  258367 cri.go:76] found id: "22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:53.398829  258367 cri.go:76] found id: ""
	I0708 23:04:53.398833  258367 logs.go:270] 1 containers: [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba]
	I0708 23:04:53.398865  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.401349  258367 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0708 23:04:53.401392  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0708 23:04:53.421390  258367 cri.go:76] found id: "b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:53.421404  258367 cri.go:76] found id: ""
	I0708 23:04:53.421409  258367 logs.go:270] 1 containers: [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891]
	I0708 23:04:53.421442  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.423799  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0708 23:04:53.423844  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0708 23:04:53.443510  258367 cri.go:76] found id: "44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:53.443526  258367 cri.go:76] found id: ""
	I0708 23:04:53.443531  258367 logs.go:270] 1 containers: [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab]
	I0708 23:04:53.443560  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.445900  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0708 23:04:53.445940  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0708 23:04:53.466255  258367 cri.go:76] found id: "49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:53.466268  258367 cri.go:76] found id: ""
	I0708 23:04:53.466273  258367 logs.go:270] 1 containers: [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27]
	I0708 23:04:53.466303  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.468712  258367 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0708 23:04:53.468766  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0708 23:04:53.488311  258367 cri.go:76] found id: ""
	I0708 23:04:53.488323  258367 logs.go:270] 0 containers: []
	W0708 23:04:53.488328  258367 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0708 23:04:53.488342  258367 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0708 23:04:53.488393  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0708 23:04:53.508339  258367 cri.go:76] found id: "bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:53.508353  258367 cri.go:76] found id: ""
	I0708 23:04:53.508357  258367 logs.go:270] 1 containers: [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc]
	I0708 23:04:53.508388  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.510777  258367 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0708 23:04:53.510819  258367 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0708 23:04:53.530634  258367 cri.go:76] found id: "8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:53.530668  258367 cri.go:76] found id: ""
	I0708 23:04:53.530682  258367 logs.go:270] 1 containers: [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194]
	I0708 23:04:53.530721  258367 ssh_runner.go:149] Run: which crictl
	I0708 23:04:53.533156  258367 logs.go:123] Gathering logs for etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] ...
	I0708 23:04:53.533169  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba"
	I0708 23:04:53.558018  258367 logs.go:123] Gathering logs for kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] ...
	I0708 23:04:53.558035  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab"
	I0708 23:04:53.581895  258367 logs.go:123] Gathering logs for kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] ...
	I0708 23:04:53.581912  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968"
	I0708 23:04:53.633038  258367 logs.go:123] Gathering logs for coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] ...
	I0708 23:04:53.633079  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891"
	I0708 23:04:53.661558  258367 logs.go:123] Gathering logs for kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] ...
	I0708 23:04:53.661578  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27"
	I0708 23:04:53.686131  258367 logs.go:123] Gathering logs for storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] ...
	I0708 23:04:53.686151  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc"
	I0708 23:04:53.711706  258367 logs.go:123] Gathering logs for kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] ...
	I0708 23:04:53.711749  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194"
	I0708 23:04:53.756467  258367 logs.go:123] Gathering logs for kubelet ...
	I0708 23:04:53.756491  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0708 23:04:53.822558  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:53.824068  258367 logs.go:138] Found kubelet problem: Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:53.869132  258367 logs.go:123] Gathering logs for dmesg ...
	I0708 23:04:53.869150  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0708 23:04:53.908521  258367 logs.go:123] Gathering logs for describe nodes ...
	I0708 23:04:53.908541  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0708 23:04:54.042355  258367 logs.go:123] Gathering logs for CRI-O ...
	I0708 23:04:54.042381  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0708 23:04:54.143221  258367 logs.go:123] Gathering logs for container status ...
	I0708 23:04:54.143246  258367 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0708 23:04:54.173768  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:54.173789  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	W0708 23:04:54.173883  258367 out.go:230] X Problems detected in kubelet:
	W0708 23:04:54.173893  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.785064    1413 reflector.go:138] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-20210708230204-257783' and this object
	W0708 23:04:54.173901  258367 out.go:230]   Jul 08 23:03:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:03:44.878500    1413 reflector.go:138] object-"olm"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210708230204-257783" cannot list resource "configmaps" in API group "" in the namespace "olm": no relationship found between node 'addons-20210708230204-257783' and this object
	I0708 23:04:54.173911  258367 out.go:299] Setting ErrFile to fd 2...
	I0708 23:04:54.173915  258367 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:05:04.184611  258367 system_pods.go:59] 18 kube-system pods found
	I0708 23:05:04.184638  258367 system_pods.go:61] "coredns-558bd4d5db-zhg8q" [fbfe6d76-09e7-4b56-8c35-638662f1daaf] Running
	I0708 23:05:04.184644  258367 system_pods.go:61] "csi-hostpath-attacher-0" [06368aac-1744-4da9-99a2-47bfbb93254b] Running
	I0708 23:05:04.184648  258367 system_pods.go:61] "csi-hostpath-provisioner-0" [c6e3d45b-6ca3-4462-b533-40a15141f9ed] Running
	I0708 23:05:04.184653  258367 system_pods.go:61] "csi-hostpath-resizer-0" [58e0abcf-e21c-4040-8d81-a9d93f509885] Running
	I0708 23:05:04.184657  258367 system_pods.go:61] "csi-hostpath-snapshotter-0" [b4319b60-be58-4608-ba01-f1a8fac1d376] Running
	I0708 23:05:04.184667  258367 system_pods.go:61] "csi-hostpathplugin-0" [d94ef600-bad7-4301-b190-602faa1f36a9] Running
	I0708 23:05:04.184672  258367 system_pods.go:61] "etcd-addons-20210708230204-257783" [cf116694-48f9-416c-a5dc-55fdf60853ea] Running
	I0708 23:05:04.184679  258367 system_pods.go:61] "kindnet-ccnc6" [b4d243ff-dec0-4629-b9b4-ae527c2c32bd] Running
	I0708 23:05:04.184684  258367 system_pods.go:61] "kube-apiserver-addons-20210708230204-257783" [f058b0e7-2349-4f21-8599-37d21a5ddcd9] Running
	I0708 23:05:04.184695  258367 system_pods.go:61] "kube-controller-manager-addons-20210708230204-257783" [3f3546e8-dfaf-419b-8eeb-4e7fe07af5fc] Running
	I0708 23:05:04.184699  258367 system_pods.go:61] "kube-proxy-6dvf4" [564c5852-25e0-4d8f-8cb8-ac83fac6ee51] Running
	I0708 23:05:04.184709  258367 system_pods.go:61] "kube-scheduler-addons-20210708230204-257783" [cd710c4c-a3d0-427d-bcab-0ebd546d7cc0] Running
	I0708 23:05:04.184713  258367 system_pods.go:61] "metrics-server-77c99ccb96-g7fdg" [d976d39d-49f5-4cfd-8756-cdcf9a8caa2a] Running
	I0708 23:05:04.184725  258367 system_pods.go:61] "registry-proxy-fbwfb" [040628b4-50ba-4169-a8d6-b9804b46e10c] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0708 23:05:04.184734  258367 system_pods.go:61] "registry-pzwnr" [dd5ee812-d26b-4dfa-a00d-7cc3e2a97c4a] Running
	I0708 23:05:04.184740  258367 system_pods.go:61] "snapshot-controller-989f9ddc8-dw52m" [e4c2411c-bf6b-4d75-9cb7-5d6404e63c8e] Running
	I0708 23:05:04.184745  258367 system_pods.go:61] "snapshot-controller-989f9ddc8-wplln" [7933b7c4-ea5d-4dd9-b07a-b6d744284afe] Running
	I0708 23:05:04.184751  258367 system_pods.go:61] "storage-provisioner" [b28c0bf2-6ddc-4279-a4b8-40712894afe3] Running
	I0708 23:05:04.184756  258367 system_pods.go:74] duration metric: took 10.836653435s to wait for pod list to return data ...
	I0708 23:05:04.184778  258367 default_sa.go:34] waiting for default service account to be created ...
	I0708 23:05:04.186977  258367 default_sa.go:45] found service account: "default"
	I0708 23:05:04.186991  258367 default_sa.go:55] duration metric: took 2.203718ms for default service account to be created ...
	I0708 23:05:04.186997  258367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 23:05:04.194163  258367 system_pods.go:86] 18 kube-system pods found
	I0708 23:05:04.194189  258367 system_pods.go:89] "coredns-558bd4d5db-zhg8q" [fbfe6d76-09e7-4b56-8c35-638662f1daaf] Running
	I0708 23:05:04.194195  258367 system_pods.go:89] "csi-hostpath-attacher-0" [06368aac-1744-4da9-99a2-47bfbb93254b] Running
	I0708 23:05:04.194201  258367 system_pods.go:89] "csi-hostpath-provisioner-0" [c6e3d45b-6ca3-4462-b533-40a15141f9ed] Running
	I0708 23:05:04.194210  258367 system_pods.go:89] "csi-hostpath-resizer-0" [58e0abcf-e21c-4040-8d81-a9d93f509885] Running
	I0708 23:05:04.194215  258367 system_pods.go:89] "csi-hostpath-snapshotter-0" [b4319b60-be58-4608-ba01-f1a8fac1d376] Running
	I0708 23:05:04.194223  258367 system_pods.go:89] "csi-hostpathplugin-0" [d94ef600-bad7-4301-b190-602faa1f36a9] Running
	I0708 23:05:04.194228  258367 system_pods.go:89] "etcd-addons-20210708230204-257783" [cf116694-48f9-416c-a5dc-55fdf60853ea] Running
	I0708 23:05:04.194239  258367 system_pods.go:89] "kindnet-ccnc6" [b4d243ff-dec0-4629-b9b4-ae527c2c32bd] Running
	I0708 23:05:04.194244  258367 system_pods.go:89] "kube-apiserver-addons-20210708230204-257783" [f058b0e7-2349-4f21-8599-37d21a5ddcd9] Running
	I0708 23:05:04.194251  258367 system_pods.go:89] "kube-controller-manager-addons-20210708230204-257783" [3f3546e8-dfaf-419b-8eeb-4e7fe07af5fc] Running
	I0708 23:05:04.194256  258367 system_pods.go:89] "kube-proxy-6dvf4" [564c5852-25e0-4d8f-8cb8-ac83fac6ee51] Running
	I0708 23:05:04.194265  258367 system_pods.go:89] "kube-scheduler-addons-20210708230204-257783" [cd710c4c-a3d0-427d-bcab-0ebd546d7cc0] Running
	I0708 23:05:04.194270  258367 system_pods.go:89] "metrics-server-77c99ccb96-g7fdg" [d976d39d-49f5-4cfd-8756-cdcf9a8caa2a] Running
	I0708 23:05:04.194281  258367 system_pods.go:89] "registry-proxy-fbwfb" [040628b4-50ba-4169-a8d6-b9804b46e10c] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0708 23:05:04.194286  258367 system_pods.go:89] "registry-pzwnr" [dd5ee812-d26b-4dfa-a00d-7cc3e2a97c4a] Running
	I0708 23:05:04.194295  258367 system_pods.go:89] "snapshot-controller-989f9ddc8-dw52m" [e4c2411c-bf6b-4d75-9cb7-5d6404e63c8e] Running
	I0708 23:05:04.194299  258367 system_pods.go:89] "snapshot-controller-989f9ddc8-wplln" [7933b7c4-ea5d-4dd9-b07a-b6d744284afe] Running
	I0708 23:05:04.194306  258367 system_pods.go:89] "storage-provisioner" [b28c0bf2-6ddc-4279-a4b8-40712894afe3] Running
	I0708 23:05:04.194311  258367 system_pods.go:126] duration metric: took 7.310147ms to wait for k8s-apps to be running ...
	I0708 23:05:04.194322  258367 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 23:05:04.194365  258367 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:05:04.211032  258367 system_svc.go:56] duration metric: took 16.707896ms WaitForService to wait for kubelet.
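The kubelet service check above is a single command: `systemctl is-active --quiet <unit>` exits 0 exactly when the unit is active. A sketch of that probe run locally with os/exec; minikube executes it over SSH inside the node instead:

package main

import (
	"fmt"
	"os/exec"
)

// isServiceActive shells out the same probe the ssh_runner line shows:
// a zero exit status from `systemctl is-active --quiet` means the unit is active.
func isServiceActive(unit string) bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", isServiceActive("kubelet"))
}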
	I0708 23:05:04.211068  258367 kubeadm.go:547] duration metric: took 1m59.15259272s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0708 23:05:04.211096  258367 node_conditions.go:102] verifying NodePressure condition ...
	I0708 23:05:04.213899  258367 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0708 23:05:04.213926  258367 node_conditions.go:123] node cpu capacity is 2
	I0708 23:05:04.213937  258367 node_conditions.go:105] duration metric: took 2.836867ms to run NodePressure ...
	I0708 23:05:04.213945  258367 start.go:225] waiting for startup goroutines ...
	I0708 23:05:04.557167  258367 start.go:462] kubectl: 1.21.2, cluster: 1.21.2 (minor skew: 0)
	I0708 23:05:04.559326  258367 out.go:165] * Done! kubectl is now configured to use "addons-20210708230204-257783" cluster and "default" namespace by default
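Throughout the log, every cri.go "listing CRI containers" step resolves to the same command: `sudo crictl ps -a --quiet --name=<name>`, which prints one container ID per line, or nothing when no container matches (as with "kubernetes-dashboard" above). A standalone sketch of that lookup:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (any state) whose name
// matches the given filter, one ID per output line from crictl.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}

An empty result is not an error here, which is why the log reports "0 containers: []" with only a warning rather than a failure.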
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Thu 2021-07-08 23:02:10 UTC, end at Thu 2021-07-08 23:17:15 UTC. --
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.539420759Z" level=info msg="Removed pod sandbox: 33e8fba615e5a04b0cfa02d02f2ab5d268a81df30f06243c90a272e80a97e87e" id=5c3d44f5-9ea3-4d2d-8718-8d7399f02c16 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.541160015Z" level=info msg="Stopping pod sandbox: 74c774f80ed8489e938ac178e6b0d897c5a7aee0c824b0fe51ab28947cd14b45" id=9645e6b6-7597-4973-83a2-494ee7aca2f0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.541195124Z" level=info msg="Stopped pod sandbox (already stopped): 74c774f80ed8489e938ac178e6b0d897c5a7aee0c824b0fe51ab28947cd14b45" id=9645e6b6-7597-4973-83a2-494ee7aca2f0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.541378850Z" level=info msg="Removing pod sandbox: 74c774f80ed8489e938ac178e6b0d897c5a7aee0c824b0fe51ab28947cd14b45" id=64ec68eb-26d4-4d03-b089-2ba986cb51f3 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jul 08 23:14:00 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:14:00.562429112Z" level=info msg="Removed pod sandbox: 74c774f80ed8489e938ac178e6b0d897c5a7aee0c824b0fe51ab28947cd14b45" id=64ec68eb-26d4-4d03-b089-2ba986cb51f3 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jul 08 23:15:03 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:03.081845434Z" level=info msg="Checking image status: quay.io/operator-framework/olm:v0.17.0@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607" id=53c0cc9b-2e19-4523-878b-cc5753eb91f1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:15:03 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:03.082647818Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2,RepoTags:[],RepoDigests:[quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607],Size_:228537074,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=53c0cc9b-2e19-4523-878b-cc5753eb91f1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:15:03 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:03.083119697Z" level=info msg="Checking image status: quay.io/operator-framework/olm:v0.17.0@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607" id=38b2ba95-d151-4b7f-a1fd-7a187e5b8779 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:15:03 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:03.083950142Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2,RepoTags:[],RepoDigests:[quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607],Size_:228537074,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=38b2ba95-d151-4b7f-a1fd-7a187e5b8779 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:15:03 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:03.084677484Z" level=info msg="Creating container: olm/catalog-operator-75d496484d-m4465/catalog-operator" id=57dfcb8e-b376-442b-bece-cfd7792642a5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:15:03 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:03.173083573Z" level=info msg="Created container 74f95ccdbbe863464f5baa12fc7f82d27fa596339952501563c0f7fd767701b3: olm/catalog-operator-75d496484d-m4465/catalog-operator" id=57dfcb8e-b376-442b-bece-cfd7792642a5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:15:03 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:03.173523009Z" level=info msg="Starting container: 74f95ccdbbe863464f5baa12fc7f82d27fa596339952501563c0f7fd767701b3" id=d5a1d4ac-4e85-4b05-9b93-9024cbeaca19 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:15:03 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:03.184930789Z" level=info msg="Started container 74f95ccdbbe863464f5baa12fc7f82d27fa596339952501563c0f7fd767701b3: olm/catalog-operator-75d496484d-m4465/catalog-operator" id=d5a1d4ac-4e85-4b05-9b93-9024cbeaca19 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:15:04 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:04.078279844Z" level=info msg="Removing container: e098c5912861b6cb83d99c870f2b8cda38b831875e6440dfc1c89af8a8ea28d2" id=8ac63ba4-9a50-423b-ae9d-70db72513cdc name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:15:04 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:04.101829121Z" level=info msg="Removed container e098c5912861b6cb83d99c870f2b8cda38b831875e6440dfc1c89af8a8ea28d2: olm/catalog-operator-75d496484d-m4465/catalog-operator" id=8ac63ba4-9a50-423b-ae9d-70db72513cdc name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:15:09 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:09.082130907Z" level=info msg="Checking image status: quay.io/operator-framework/olm:v0.17.0@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607" id=ac5ed00f-2d3b-49eb-aad3-4a1f97330876 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:15:09 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:09.082908775Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2,RepoTags:[],RepoDigests:[quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607],Size_:228537074,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ac5ed00f-2d3b-49eb-aad3-4a1f97330876 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:15:09 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:09.083522050Z" level=info msg="Checking image status: quay.io/operator-framework/olm:v0.17.0@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607" id=d2cc5752-5f20-4ce7-a585-119eb9490a17 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:15:09 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:09.084284370Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2,RepoTags:[],RepoDigests:[quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607],Size_:228537074,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d2cc5752-5f20-4ce7-a585-119eb9490a17 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:15:09 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:09.085054566Z" level=info msg="Creating container: olm/olm-operator-859c88c96-mqphx/olm-operator" id=c9ba548f-30c1-474f-a8aa-672798ca1d53 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:15:09 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:09.184008555Z" level=info msg="Created container dfa044c299d54fcb4ba82d5e6ab5fc06ede687fe49be2f7916f01098fbcbff70: olm/olm-operator-859c88c96-mqphx/olm-operator" id=c9ba548f-30c1-474f-a8aa-672798ca1d53 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:15:09 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:09.184523322Z" level=info msg="Starting container: dfa044c299d54fcb4ba82d5e6ab5fc06ede687fe49be2f7916f01098fbcbff70" id=d6d60f84-4dbe-49f8-a619-e6180abef2c7 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:15:09 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:09.195800437Z" level=info msg="Started container dfa044c299d54fcb4ba82d5e6ab5fc06ede687fe49be2f7916f01098fbcbff70: olm/olm-operator-859c88c96-mqphx/olm-operator" id=d6d60f84-4dbe-49f8-a619-e6180abef2c7 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:15:10 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:10.092478754Z" level=info msg="Removing container: 7f205f54527c56ba1a0e6e07e18b828332512a96455e888e61a0cff526cfa606" id=34766121-624b-44a8-9846-bf655ffab3b6 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 08 23:15:10 addons-20210708230204-257783 crio[459]: time="2021-07-08 23:15:10.133231332Z" level=info msg="Removed container 7f205f54527c56ba1a0e6e07e18b828332512a96455e888e61a0cff526cfa606: olm/olm-operator-859c88c96-mqphx/olm-operator" id=34766121-624b-44a8-9846-bf655ffab3b6 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID
	dfa044c299d54       d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2                                    2 minutes ago       Exited              olm-operator              7                   39716b2bcc234
	74f95ccdbbe86       d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2                                    2 minutes ago       Exited              catalog-operator          7                   0d57c0d81cf66
	abe6a692c11c1       docker.io/library/nginx@sha256:833dc94560d9cdb945a0b83bb02b93372ce2dcdf34f4df30fe8f5656ce5d3fb5     8 minutes ago       Running             nginx                     0                   98f760962d57f
	ebfe0e28edd0e       docker.io/library/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 minutes ago       Running             busybox                   0                   ead2d6167aad3
	bb436f463bf93       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                    13 minutes ago      Running             storage-provisioner       0                   d5ad87804e6dd
	b6a45c30ce188       1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8                                    13 minutes ago      Running             coredns                   0                   d821186888bb4
	49b31069db4e9       d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105                                    14 minutes ago      Running             kube-proxy                0                   93a788d293a45
	73ad782fb9631       f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301                                    14 minutes ago      Running             kindnet-cni               0                   9598713e2c095
	44be06430cace       ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4                                    14 minutes ago      Running             kube-scheduler            0                   a9740d6135af4
	8f4ecb2eb8a37       9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630                                    14 minutes ago      Running             kube-controller-manager   0                   9266c7ca5f01a
	31b86861c38b7       2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0                                    14 minutes ago      Running             kube-apiserver            0                   15a6cd929d883
	22dcce2859577       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28                                    14 minutes ago      Running             etcd                      0                   64b65798fba40
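	The two Exited entries above (olm-operator and catalog-operator, both at attempt 7) line up with the CrashLoopBackOff messages in the kubelet log further down. A minimal follow-up sketch, assuming the context and pod name that appear elsewhere in this log (standard kubectl/crictl invocations, not part of the test run):
	
	  kubectl --context addons-20210708230204-257783 -n olm logs olm-operator-859c88c96-mqphx --previous
	  sudo crictl logs dfa044c299d54   # same crash via the CRI-O CLI; container ID prefix taken from the table above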
	
	* 
	* ==> coredns [b6a45c30ce1883aed347ff34f78cd07c1038686a1789ff87b8c2cace31ca2891] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210708230204-257783
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210708230204-257783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5
	                    minikube.k8s.io/name=addons-20210708230204-257783
	                    minikube.k8s.io/updated_at=2021_07_08T23_02_52_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210708230204-257783
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 08 Jul 2021 23:02:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210708230204-257783
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 08 Jul 2021 23:17:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 08 Jul 2021 23:14:45 +0000   Thu, 08 Jul 2021 23:02:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 08 Jul 2021 23:14:45 +0000   Thu, 08 Jul 2021 23:02:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 08 Jul 2021 23:14:45 +0000   Thu, 08 Jul 2021 23:02:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 08 Jul 2021 23:14:45 +0000   Thu, 08 Jul 2021 23:03:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210708230204-257783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                d6a6fe2c-69df-437d-be5e-65297693e451
	  Boot ID:                    7cbe50af-3171-4d81-8fca-78216a04984f
	  Kernel Version:             5.8.0-1038-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.2
	  Kube-Proxy Version:         v1.21.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     nginx                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 coredns-558bd4d5db-zhg8q                                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-20210708230204-257783                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-ccnc6                                            100m (5%)    100m (5%)    50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-20210708230204-257783              250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-20210708230204-257783    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6dvf4                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-20210708230204-257783              100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  olm                         catalog-operator-75d496484d-m4465                        10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         14m
	  olm                         olm-operator-859c88c96-mqphx                             10m (0%)      0 (0%)      160Mi (2%)       0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                870m (43%)  100m (5%)
	  memory             460Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  14m (x5 over 14m)  kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x5 over 14m)  kubelet     Node addons-20210708230204-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x4 over 14m)  kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet     Node addons-20210708230204-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet     Node addons-20210708230204-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                13m                kubelet     Node addons-20210708230204-257783 status is now: NodeReady
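	The 43% CPU-request figure above is the column sum of the pod table: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 10m (catalog-operator) + 10m (olm-operator) = 870m, divided by the node's 2 CPUs (2000m): 870/2000 ≈ 43%.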
	
	* 
	* ==> dmesg <==
	* [  +0.000490] FS-Cache: N-cookie c=0000000017e17a7f [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000786] FS-Cache: N-cookie d=0000000052778918 n=0000000001cad34c
	[  +0.000659] FS-Cache: N-key=[8] '2e75010000000000'
	[  +0.311255] FS-Cache: Duplicate cookie detected
	[  +0.000504] FS-Cache: O-cookie c=0000000014ac9dbc [p=000000001984cbd2 fl=226 nc=0 na=1]
	[  +0.000814] FS-Cache: O-cookie d=0000000052778918 n=00000000bafd5126
	[  +0.000704] FS-Cache: O-key=[8] '2c75010000000000'
	[  +0.000510] FS-Cache: N-cookie c=00000000e94062d6 [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000812] FS-Cache: N-cookie d=0000000052778918 n=00000000edbe8e34
	[  +0.000658] FS-Cache: N-key=[8] '2c75010000000000'
	[  +0.000965] FS-Cache: Duplicate cookie detected
	[  +0.000522] FS-Cache: O-cookie c=00000000f7e9a7d0 [p=000000001984cbd2 fl=226 nc=0 na=1]
	[  +0.000899] FS-Cache: O-cookie d=0000000052778918 n=000000008aaa8b20
	[  +0.000656] FS-Cache: O-key=[8] '2e75010000000000'
	[  +0.000483] FS-Cache: N-cookie c=00000000e94062d6 [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000799] FS-Cache: N-cookie d=0000000052778918 n=00000000d5f43b3c
	[  +0.000664] FS-Cache: N-key=[8] '2e75010000000000'
	[  +0.000960] FS-Cache: Duplicate cookie detected
	[  +0.000564] FS-Cache: O-cookie c=000000005908ab4f [p=000000001984cbd2 fl=226 nc=0 na=1]
	[  +0.000814] FS-Cache: O-cookie d=0000000052778918 n=00000000bdc5b826
	[  +0.000669] FS-Cache: O-key=[8] '2d75010000000000'
	[  +0.000501] FS-Cache: N-cookie c=00000000e94062d6 [p=000000001984cbd2 fl=2 nc=0 na=1]
	[  +0.000808] FS-Cache: N-cookie d=0000000052778918 n=000000005db15c82
	[  +0.000658] FS-Cache: N-key=[8] '2d75010000000000'
	[Jul 8 22:38] tee (195612): /proc/195320/oom_adj is deprecated, please use /proc/195320/oom_score_adj instead.
	
	* 
	* ==> etcd [22dcce2859577b890aacc4d405bf04b0ba9e0112b9bc734e4544890f71fae3ba] <==
	* 2021-07-08 23:13:07.016566 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:13:17.017065 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:13:27.016495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:13:37.016803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:13:47.016564 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:13:57.016580 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:14:07.016508 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:14:17.016318 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:14:27.016339 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:14:37.017111 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:14:47.016804 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:14:57.017197 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:15:07.019953 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:15:17.016273 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:15:27.016579 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:15:37.016464 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:15:47.017079 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:15:57.016777 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:16:07.016411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:16:17.016894 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:16:27.016434 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:16:37.017023 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:16:47.016915 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:16:57.016839 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:17:07.016671 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  23:17:16 up  1:59,  0 users,  load average: 0.04, 0.32, 0.92
	Linux addons-20210708230204-257783 5.8.0-1038-aws #40~20.04.1-Ubuntu SMP Thu Jun 17 13:20:15 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [31b86861c38b75e8496a4248a2a4f763ff57eda1e9c874476ce045b53a1a9968] <==
	* I0708 23:12:09.296287       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:12:49.572203       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:12:49.572239       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:12:49.572247       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:13:25.064733       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:13:25.064769       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:13:25.064777       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:14:10.028585       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:14:10.028680       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:14:10.028709       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:14:41.211127       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:14:41.211166       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:14:41.211173       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:15:23.067433       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:15:23.067471       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:15:23.067479       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:15:58.810570       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:15:58.810607       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:15:58.810615       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:16:31.863319       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:16:31.863354       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:16:31.863362       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:17:05.346004       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:17:05.346043       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:17:05.346051       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [8f4ecb2eb8a3783d3633b0332b33a7374adfa400fe9456c28b1bfcab5fbe3194] <==
	* E0708 23:11:10.020744       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:11:12.030598       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:11:54.492455       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:12:01.045932       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:12:10.299904       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:12:44.257985       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:12:52.790702       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:13:09.631759       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:13:23.326138       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:13:42.071365       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:13:45.522363       1 tokens_controller.go:262] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-xmd7p" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	E0708 23:13:57.861722       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:14:03.538485       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0708 23:14:11.850004       1 namespace_controller.go:185] Namespace has been deleted ingress-nginx
	E0708 23:14:34.307523       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:14:35.084796       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:14:38.978234       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:15:27.801525       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:15:28.283499       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:15:32.165538       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:16:03.260971       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:16:09.566149       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:16:24.991338       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:16:42.365866       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0708 23:16:50.210754       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [49b31069db4e9e26aa0a258dc49a3fe347f19ca95b8f97a775b87281dad18d27] <==
	* I0708 23:03:09.902381       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0708 23:03:09.906684       1 server_others.go:140] Detected node IP 192.168.49.2
	W0708 23:03:09.906750       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0708 23:03:10.210383       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0708 23:03:10.210464       1 server_others.go:212] Using iptables Proxier.
	I0708 23:03:10.210493       1 server_others.go:219] creating dualStackProxier for iptables.
	W0708 23:03:10.210520       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0708 23:03:10.210831       1 server.go:643] Version: v1.21.2
	I0708 23:03:10.211356       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0708 23:03:10.211432       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0708 23:03:10.212794       1 config.go:315] Starting service config controller
	I0708 23:03:10.212844       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0708 23:03:10.212883       1 config.go:224] Starting endpoint slice config controller
	I0708 23:03:10.212913       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0708 23:03:10.271414       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0708 23:03:10.348217       1 shared_informer.go:247] Caches are synced for service config 
	W0708 23:03:10.386942       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0708 23:03:10.413707       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	W0708 23:09:03.367812       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [44be06430cace04ebc712681506d8dcf23e6433d42a99e67f7f698309fda22ab] <==
	* I0708 23:02:43.963281       1 serving.go:347] Generated self-signed cert in-memory
	W0708 23:02:48.492553       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0708 23:02:48.492619       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0708 23:02:48.492651       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0708 23:02:48.492673       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0708 23:02:48.569268       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0708 23:02:48.569786       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 23:02:48.569806       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0708 23:02:48.569820       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0708 23:02:48.580237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:02:48.580442       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 23:02:48.580560       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:02:48.580655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:02:48.580751       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 23:02:48.580849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 23:02:48.580946       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 23:02:48.581068       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:48.581168       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:02:48.581258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:02:48.587927       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:48.588058       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:48.588185       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:02:48.588270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:02:49.542237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0708 23:02:50.170585       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-07-08 23:02:10 UTC, end at Thu 2021-07-08 23:17:16 UTC. --
	Jul 08 23:16:19 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:19.537084    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:16:20 addons-20210708230204-257783 kubelet[1413]: I0708 23:16:20.082160    1413 scope.go:111] "RemoveContainer" containerID="dfa044c299d54fcb4ba82d5e6ab5fc06ede687fe49be2f7916f01098fbcbff70"
	Jul 08 23:16:20 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:20.083617    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-mqphx_olm(9003a85c-a958-402c-8dd8-812ba5acd952)\"" pod="olm/olm-operator-859c88c96-mqphx" podUID=9003a85c-a958-402c-8dd8-812ba5acd952
	Jul 08 23:16:26 addons-20210708230204-257783 kubelet[1413]: I0708 23:16:26.081154    1413 scope.go:111] "RemoveContainer" containerID="74f95ccdbbe863464f5baa12fc7f82d27fa596339952501563c0f7fd767701b3"
	Jul 08 23:16:26 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:26.081488    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-m4465_olm(9e760c3b-a92b-4bfe-9207-05ac187021fb)\"" pod="olm/catalog-operator-75d496484d-m4465" podUID=9e760c3b-a92b-4bfe-9207-05ac187021fb
	Jul 08 23:16:29 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:29.594378    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:16:32 addons-20210708230204-257783 kubelet[1413]: I0708 23:16:32.081152    1413 scope.go:111] "RemoveContainer" containerID="dfa044c299d54fcb4ba82d5e6ab5fc06ede687fe49be2f7916f01098fbcbff70"
	Jul 08 23:16:32 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:32.081506    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-mqphx_olm(9003a85c-a958-402c-8dd8-812ba5acd952)\"" pod="olm/olm-operator-859c88c96-mqphx" podUID=9003a85c-a958-402c-8dd8-812ba5acd952
	Jul 08 23:16:35 addons-20210708230204-257783 kubelet[1413]: W0708 23:16:35.221597    1413 container.go:586] Failed to update stats for container "/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33": /sys/fs/cgroup/cpuset/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/cpuset.cpus found to be empty, continuing to push stats
	Jul 08 23:16:39 addons-20210708230204-257783 kubelet[1413]: I0708 23:16:39.080969    1413 scope.go:111] "RemoveContainer" containerID="74f95ccdbbe863464f5baa12fc7f82d27fa596339952501563c0f7fd767701b3"
	Jul 08 23:16:39 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:39.081332    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-m4465_olm(9e760c3b-a92b-4bfe-9207-05ac187021fb)\"" pod="olm/catalog-operator-75d496484d-m4465" podUID=9e760c3b-a92b-4bfe-9207-05ac187021fb
	Jul 08 23:16:39 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:39.652044    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:16:44 addons-20210708230204-257783 kubelet[1413]: I0708 23:16:44.081169    1413 scope.go:111] "RemoveContainer" containerID="dfa044c299d54fcb4ba82d5e6ab5fc06ede687fe49be2f7916f01098fbcbff70"
	Jul 08 23:16:44 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:44.081518    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-mqphx_olm(9003a85c-a958-402c-8dd8-812ba5acd952)\"" pod="olm/olm-operator-859c88c96-mqphx" podUID=9003a85c-a958-402c-8dd8-812ba5acd952
	Jul 08 23:16:49 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:49.712087    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:16:52 addons-20210708230204-257783 kubelet[1413]: I0708 23:16:52.081850    1413 scope.go:111] "RemoveContainer" containerID="74f95ccdbbe863464f5baa12fc7f82d27fa596339952501563c0f7fd767701b3"
	Jul 08 23:16:52 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:52.082426    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-m4465_olm(9e760c3b-a92b-4bfe-9207-05ac187021fb)\"" pod="olm/catalog-operator-75d496484d-m4465" podUID=9e760c3b-a92b-4bfe-9207-05ac187021fb
	Jul 08 23:16:58 addons-20210708230204-257783 kubelet[1413]: I0708 23:16:58.081405    1413 scope.go:111] "RemoveContainer" containerID="dfa044c299d54fcb4ba82d5e6ab5fc06ede687fe49be2f7916f01098fbcbff70"
	Jul 08 23:16:58 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:58.081755    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-mqphx_olm(9003a85c-a958-402c-8dd8-812ba5acd952)\"" pod="olm/olm-operator-859c88c96-mqphx" podUID=9003a85c-a958-402c-8dd8-812ba5acd952
	Jul 08 23:16:59 addons-20210708230204-257783 kubelet[1413]: E0708 23:16:59.774318    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:17:06 addons-20210708230204-257783 kubelet[1413]: I0708 23:17:06.081688    1413 scope.go:111] "RemoveContainer" containerID="74f95ccdbbe863464f5baa12fc7f82d27fa596339952501563c0f7fd767701b3"
	Jul 08 23:17:06 addons-20210708230204-257783 kubelet[1413]: E0708 23:17:06.082080    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-m4465_olm(9e760c3b-a92b-4bfe-9207-05ac187021fb)\"" pod="olm/catalog-operator-75d496484d-m4465" podUID=9e760c3b-a92b-4bfe-9207-05ac187021fb
	Jul 08 23:17:09 addons-20210708230204-257783 kubelet[1413]: E0708 23:17:09.840439    1413 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33/docker/077ecedfa7d5bc4aaab40c88d1f3b63fb0798d02789c9ae4c69cd5543f887c33\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:17:12 addons-20210708230204-257783 kubelet[1413]: I0708 23:17:12.081238    1413 scope.go:111] "RemoveContainer" containerID="dfa044c299d54fcb4ba82d5e6ab5fc06ede687fe49be2f7916f01098fbcbff70"
	Jul 08 23:17:12 addons-20210708230204-257783 kubelet[1413]: E0708 23:17:12.081600    1413 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-mqphx_olm(9003a85c-a958-402c-8dd8-812ba5acd952)\"" pod="olm/olm-operator-859c88c96-mqphx" podUID=9003a85c-a958-402c-8dd8-812ba5acd952
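	Both operators have reached kubelet's restart back-off cap: the delay starts at 10s and doubles per failed restart (10, 20, 40, 80, 160, 320 seconds), so it is clamped to the 5m0s maximum from roughly the sixth restart onward, consistent with attempt 7 in the container-status table above.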
	
	* 
	* ==> storage-provisioner [bb436f463bf936e1f1c3c2618314b42e46cf26a94a2c10c5dfb4fb5a510cd1cc] <==
	* I0708 23:03:53.849841       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 23:03:53.884827       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 23:03:53.884984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 23:03:53.891142       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 23:03:53.891492       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210708230204-257783_62469888-f004-4c23-ac43-994b853f756a!
	I0708 23:03:53.892324       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32752ce7-5488-420e-924b-bb68b54fe2d8", APIVersion:"v1", ResourceVersion:"1011", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210708230204-257783_62469888-f004-4c23-ac43-994b853f756a became leader
	I0708 23:03:53.991866       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210708230204-257783_62469888-f004-4c23-ac43-994b853f756a!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210708230204-257783 -n addons-20210708230204-257783
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210708230204-257783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestAddons/parallel/Olm]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context addons-20210708230204-257783 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context addons-20210708230204-257783 describe pod : exit status 1 (59.739459ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:275: kubectl --context addons-20210708230204-257783 describe pod : exit status 1
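The exit-1 above is a side effect of an empty pod list: the field selector at helpers_test.go:262 found no non-running pods, so "kubectl describe pod" ran with no names and kubectl rejects the empty argument. A defensive sketch of that post-mortem step (illustrative shell under that assumption, not the harness code):

	pods=$(kubectl --context addons-20210708230204-257783 get po -A --field-selector=status.phase!=Running -o jsonpath='{.items[*].metadata.name}')
	[ -n "$pods" ] && kubectl --context addons-20210708230204-257783 describe pod $pods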
--- FAIL: TestAddons/parallel/Olm (732.49s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver (14.97s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
E0708 23:36:27.635903  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:36:36.692695  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": exit status 1 (14.968286004s)

                                                
                                                
-- stdout --
	Get:1 http://deb.debian.org/debian sid InRelease [161 kB]
	Get:2 http://deb.debian.org/debian sid/main arm64 Packages [8512 kB]
	Fetched 8673 kB in 1s (7465 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dmeventd dmsetup libaio1 libapparmor1 libbrotli1 libbsd0
	  libcurl3-gnutls libdevmapper-event1.02.1 libdevmapper1.02.1 libedit2
	  libexpat1 libglib2.0-0 libglib2.0-data libicu67 libldap-2.4-2 libldap-common
	  liblvm2cmd2.03 libmd0 libnghttp2-14 libnl-3-200 libnuma1 libpsl5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libxml2 libyajl2
	  lvm2 openssl publicsuffix shared-mime-info thin-provisioning-tools
	  xdg-user-dirs
	Suggested packages:
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql
	The following NEW packages will be installed:
	  ca-certificates dmeventd dmsetup libaio1 libapparmor1 libbrotli1 libbsd0
	  libcurl3-gnutls libdevmapper-event1.02.1 libdevmapper1.02.1 libedit2
	  libexpat1 libglib2.0-0 libglib2.0-data libicu67 libldap-2.4-2 libldap-common
	  liblvm2cmd2.03 libmd0 libnghttp2-14 libnl-3-200 libnuma1 libpsl5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libvirt0 libxml2
	  libyajl2 lvm2 openssl publicsuffix shared-mime-info thin-provisioning-tools
	  xdg-user-dirs
	0 upgraded, 37 newly installed, 0 to remove and 3 not upgraded.
	Need to get 21.6 MB of archives.
	After this operation, 95.2 MB of additional disk space will be used.
	Get:1 http://deb.debian.org/debian sid/main arm64 libaio1 arm64 0.3.112-9 [12.3 kB]
	Get:2 http://deb.debian.org/debian sid/main arm64 dmsetup arm64 2:1.02.175-2.1 [85.1 kB]
	Get:3 http://deb.debian.org/debian sid/main arm64 libdevmapper1.02.1 arm64 2:1.02.175-2.1 [126 kB]
	Get:4 http://deb.debian.org/debian sid/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.175-2.1 [22.4 kB]
	Get:5 http://deb.debian.org/debian sid/main arm64 libmd0 arm64 1.0.3-3 [27.9 kB]
	Get:6 http://deb.debian.org/debian sid/main arm64 libbsd0 arm64 0.11.3-1 [106 kB]
	Get:7 http://deb.debian.org/debian sid/main arm64 libedit2 arm64 3.1-20191231-2+b1 [92.1 kB]
	Get:8 http://deb.debian.org/debian sid/main arm64 liblvm2cmd2.03 arm64 2.03.11-2.1 [608 kB]
	Get:9 http://deb.debian.org/debian sid/main arm64 dmeventd arm64 2:1.02.175-2.1 [66.5 kB]
	Get:10 http://deb.debian.org/debian sid/main arm64 lvm2 arm64 2.03.11-2.1 [1086 kB]
	Get:11 http://deb.debian.org/debian sid/main arm64 openssl arm64 1.1.1k-1 [829 kB]
	Get:12 http://deb.debian.org/debian sid/main arm64 ca-certificates all 20210119 [158 kB]
	Get:13 http://deb.debian.org/debian sid/main arm64 libapparmor1 arm64 2.13.6-10 [98.5 kB]
	Get:14 http://deb.debian.org/debian sid/main arm64 libbrotli1 arm64 1.0.9-2+b2 [267 kB]
	Get:15 http://deb.debian.org/debian sid/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-2.1 [69.3 kB]
	Get:16 http://deb.debian.org/debian sid/main arm64 libsasl2-2 arm64 2.1.27+dfsg-2.1 [105 kB]
	Get:17 http://deb.debian.org/debian sid/main arm64 libldap-2.4-2 arm64 2.4.57+dfsg-3 [222 kB]
	Get:18 http://deb.debian.org/debian sid/main arm64 libnghttp2-14 arm64 1.43.0-1 [73.8 kB]
	Get:19 http://deb.debian.org/debian sid/main arm64 libpsl5 arm64 0.21.0-1.2 [57.1 kB]
	Get:20 http://deb.debian.org/debian sid/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2+b2 [59.4 kB]
	Get:21 http://deb.debian.org/debian sid/main arm64 libssh2-1 arm64 1.9.0-3 [162 kB]
	Get:22 http://deb.debian.org/debian sid/main arm64 libcurl3-gnutls arm64 7.74.0-1.3+b1 [318 kB]
	Get:23 http://deb.debian.org/debian sid/main arm64 libexpat1 arm64 2.2.10-2 [83.1 kB]
	Get:24 http://deb.debian.org/debian sid/main arm64 libglib2.0-0 arm64 2.66.8-1 [1286 kB]
	Get:25 http://deb.debian.org/debian sid/main arm64 libglib2.0-data all 2.66.8-1 [1164 kB]
	Get:26 http://deb.debian.org/debian sid/main arm64 libicu67 arm64 67.1-7 [8467 kB]
	Get:27 http://deb.debian.org/debian sid/main arm64 libldap-common all 2.4.57+dfsg-3 [95.9 kB]
	Get:28 http://deb.debian.org/debian sid/main arm64 libnl-3-200 arm64 3.4.0-1+b1 [60.6 kB]
	Get:29 http://deb.debian.org/debian sid/main arm64 libnuma1 arm64 2.0.12-1+b1 [25.8 kB]
	Get:30 http://deb.debian.org/debian sid/main arm64 libsasl2-modules arm64 2.1.27+dfsg-2.1 [101 kB]
	Get:31 http://deb.debian.org/debian sid/main arm64 libxml2 arm64 2.9.10+dfsg-6.7 [629 kB]
	Get:32 http://deb.debian.org/debian sid/main arm64 libyajl2 arm64 2.1.0-3 [22.9 kB]
	Get:33 http://deb.debian.org/debian sid/main arm64 libvirt0 arm64 7.0.0-3 [3749 kB]
	Get:34 http://deb.debian.org/debian sid/main arm64 publicsuffix all 20210108.1309-1 [121 kB]
	Get:35 http://deb.debian.org/debian sid/main arm64 shared-mime-info arm64 2.0-1 [700 kB]
	Get:36 http://deb.debian.org/debian sid/main arm64 thin-provisioning-tools arm64 0.9.0-1 [348 kB]
	Get:37 http://deb.debian.org/debian sid/main arm64 xdg-user-dirs arm64 0.17-2 [53.2 kB]
	Fetched 21.6 MB in 0s (46.2 MB/s)
	Selecting previously unselected package libaio1:arm64.
	(Reading database ... 6644 files and directories currently installed.)
	Preparing to unpack .../00-libaio1_0.3.112-9_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-9) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../01-dmsetup_2%3a1.02.175-2.1_arm64.deb ...
	Unpacking dmsetup (2:1.02.175-2.1) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../02-libdevmapper1.02.1_2%3a1.02.175-2.1_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.175-2.1) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../03-libdevmapper-event1.02.1_2%3a1.02.175-2.1_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.175-2.1) ...
	Selecting previously unselected package libmd0:arm64.
	Preparing to unpack .../04-libmd0_1.0.3-3_arm64.deb ...
	Unpacking libmd0:arm64 (1.0.3-3) ...
	Selecting previously unselected package libbsd0:arm64.
	Preparing to unpack .../05-libbsd0_0.11.3-1_arm64.deb ...
	Unpacking libbsd0:arm64 (0.11.3-1) ...
	Selecting previously unselected package libedit2:arm64.
	Preparing to unpack .../06-libedit2_3.1-20191231-2+b1_arm64.deb ...
	Unpacking libedit2:arm64 (3.1-20191231-2+b1) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../07-liblvm2cmd2.03_2.03.11-2.1_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.11-2.1) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../08-dmeventd_2%3a1.02.175-2.1_arm64.deb ...
	Unpacking dmeventd (2:1.02.175-2.1) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../09-lvm2_2.03.11-2.1_arm64.deb ...
	Unpacking lvm2 (2.03.11-2.1) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../10-openssl_1.1.1k-1_arm64.deb ...
	Unpacking openssl (1.1.1k-1) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../11-ca-certificates_20210119_all.deb ...
	Unpacking ca-certificates (20210119) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../12-libapparmor1_2.13.6-10_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.13.6-10) ...
	Selecting previously unselected package libbrotli1:arm64.
	Preparing to unpack .../13-libbrotli1_1.0.9-2+b2_arm64.deb ...
	Unpacking libbrotli1:arm64 (1.0.9-2+b2) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../14-libsasl2-modules-db_2.1.27+dfsg-2.1_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-2.1) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../15-libsasl2-2_2.1.27+dfsg-2.1_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-2.1) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../16-libldap-2.4-2_2.4.57+dfsg-3_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.57+dfsg-3) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../17-libnghttp2-14_1.43.0-1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.43.0-1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../18-libpsl5_0.21.0-1.2_arm64.deb ...
	Unpacking libpsl5:arm64 (0.21.0-1.2) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../19-librtmp1_2.4+20151223.gitfa8646d.1-2+b2_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2+b2) ...
	Selecting previously unselected package libssh2-1:arm64.
	Preparing to unpack .../20-libssh2-1_1.9.0-3_arm64.deb ...
	Unpacking libssh2-1:arm64 (1.9.0-3) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../21-libcurl3-gnutls_7.74.0-1.3+b1_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.74.0-1.3+b1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../22-libexpat1_2.2.10-2_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.10-2) ...
	Selecting previously unselected package libglib2.0-0:arm64.
	Preparing to unpack .../23-libglib2.0-0_2.66.8-1_arm64.deb ...
	Unpacking libglib2.0-0:arm64 (2.66.8-1) ...
	Selecting previously unselected package libglib2.0-data.
	Preparing to unpack .../24-libglib2.0-data_2.66.8-1_all.deb ...
	Unpacking libglib2.0-data (2.66.8-1) ...
	Selecting previously unselected package libicu67:arm64.
	Preparing to unpack .../25-libicu67_67.1-7_arm64.deb ...
	Unpacking libicu67:arm64 (67.1-7) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../26-libldap-common_2.4.57+dfsg-3_all.deb ...
	Unpacking libldap-common (2.4.57+dfsg-3) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../27-libnl-3-200_3.4.0-1+b1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1+b1) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../28-libnuma1_2.0.12-1+b1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1+b1) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../29-libsasl2-modules_2.1.27+dfsg-2.1_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-2.1) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../30-libxml2_2.9.10+dfsg-6.7_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.10+dfsg-6.7) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../31-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../32-libvirt0_7.0.0-3_arm64.deb ...
	Unpacking libvirt0:arm64 (7.0.0-3) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../33-publicsuffix_20210108.1309-1_all.deb ...
	Unpacking publicsuffix (20210108.1309-1) ...
	Selecting previously unselected package shared-mime-info.
	Preparing to unpack .../34-shared-mime-info_2.0-1_arm64.deb ...
	Unpacking shared-mime-info (2.0-1) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../35-thin-provisioning-tools_0.9.0-1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.9.0-1) ...
	Selecting previously unselected package xdg-user-dirs.
	Preparing to unpack .../36-xdg-user-dirs_0.17-2_arm64.deb ...
	Unpacking xdg-user-dirs (0.17-2) ...
	Setting up libexpat1:arm64 (2.2.10-2) ...
	Setting up libapparmor1:arm64 (2.13.6-10) ...
	Setting up libpsl5:arm64 (0.21.0-1.2) ...
	Setting up libicu67:arm64 (67.1-7) ...
	Setting up xdg-user-dirs (0.17-2) ...
	Setting up libglib2.0-0:arm64 (2.66.8-1) ...
	No schema files found: doing nothing.
	Setting up libbrotli1:arm64 (1.0.9-2+b2) ...
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-2.1) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.43.0-1) ...
	Setting up libldap-common (2.4.57+dfsg-3) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-2.1) ...
	Setting up libglib2.0-data (2.66.8-1) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2+b2) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-2.1) ...
	Setting up libnuma1:arm64 (2.0.12-1+b1) ...
	Setting up libmd0:arm64 (1.0.3-3) ...
	Setting up libnl-3-200:arm64 (3.4.0-1+b1) ...
	Setting up libssh2-1:arm64 (1.9.0-3) ...
	Setting up libaio1:arm64 (0.3.112-9) ...
	Setting up openssl (1.1.1k-1) ...
	Setting up libbsd0:arm64 (0.11.3-1) ...
	Setting up publicsuffix (20210108.1309-1) ...
	Setting up libxml2:arm64 (2.9.10+dfsg-6.7) ...
	Setting up libedit2:arm64 (3.1-20191231-2+b1) ...
	Setting up libldap-2.4-2:arm64 (2.4.57+dfsg-3) ...
	Setting up libcurl3-gnutls:arm64 (7.74.0-1.3+b1) ...
	Setting up ca-certificates (20210119) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1 /usr/lib/aarch64-linux-gnu/perl5/5.32 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl-base /usr/lib/aarch64-linux-gnu/perl/5.32 /usr/share/perl/5.32 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	129 added, 0 removed; done.
	Setting up shared-mime-info (2.0-1) ...
	Setting up thin-provisioning-tools (0.9.0-1) ...
	Setting up libvirt0:arm64 (7.0.0-3) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.11-2.1) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.175-2.1) ...
	Setting up dmsetup (2:1.02.175-2.1) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.175-2.1) ...
	Setting up dmeventd (2:1.02.175-2.1) ...
	Setting up lvm2 (2.03.11-2.1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.31-12) ...
	Processing triggers for ca-certificates (20210119) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

-- /stdout --
** stderr ** 
	Unable to find image 'debian:sid' locally
	sid: Pulling from library/debian
	e1aa4e0221ce: Pulling fs layer
	e1aa4e0221ce: Verifying Checksum
	e1aa4e0221ce: Download complete
	e1aa4e0221ce: Pull complete
	Digest: sha256:32d7c6d357fe6288c5ff7d0504de43de853a9cbbda2faaf3fd39b074486a8c0e
	Status: Downloaded newer image for debian:sid
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb (--install):
	 package architecture (aarch64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb

** /stderr **
pkg_install_test.go:87: failed to install "/home/jenkins/workspace/Docker_Linux_crio_arm64/out/docker-machine-driver-kvm2_1.22.0-0_arm64.deb" on "debian:sid": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver (14.97s)
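Every Debian variant in this run fails the same way: dpkg rejects the archive because its control file declares the architecture as "aarch64", while Debian's dpkg names the 64-bit ARM port "arm64" (the error line above states both values). A minimal sketch to confirm this against the same image and mount the test uses — dpkg --print-architecture and dpkg --field are standard dpkg tooling, nothing below is minikube-specific:

	docker run --rm -v /home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp debian:sid sh -c '
	  # Debian calls the 64-bit ARM port "arm64"; "aarch64" is the GNU/uname name.
	  dpkg --print-architecture
	  # Print the Architecture field the .deb actually declares.
	  dpkg --field /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb Architecture
	'

If the second command prints "aarch64", the install is expected to fail on every Debian release, which matches the results below.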

TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver (13.42s)

=== RUN   TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": exit status 1 (13.421784842s)

-- stdout --
	Get:1 http://deb.debian.org/debian buster InRelease [122 kB]
	Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
	Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
	Get:4 http://deb.debian.org/debian buster/main arm64 Packages [7735 kB]
	Get:5 http://security.debian.org/debian-security buster/updates/main arm64 Packages [288 kB]
	Get:6 http://deb.debian.org/debian buster-updates/main arm64 Packages [14.5 kB]
	Fetched 8277 kB in 1s (5954 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libavahi-client3 libavahi-common-data libavahi-common3 libcurl3-gnutls
	  libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1
	  libgssapi-krb5-2 libicu63 libk5crypto3 libkeyutils1 libkrb5-3
	  libkrb5support0 libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14
	  libnl-3-200 libnl-route-3-200 libnuma1 libpsl5 libreadline5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.1 libxml2
	  libyajl2 lsb-base lvm2 openssl publicsuffix readline-common
	  thin-provisioning-tools
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus krb5-doc krb5-user
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libavahi-client3 libavahi-common-data libavahi-common3 libcurl3-gnutls
	  libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1
	  libgssapi-krb5-2 libicu63 libk5crypto3 libkeyutils1 libkrb5-3
	  libkrb5support0 libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14
	  libnl-3-200 libnl-route-3-200 libnuma1 libpsl5 libreadline5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.1 libvirt0
	  libxml2 libyajl2 lsb-base lvm2 openssl publicsuffix readline-common
	  thin-provisioning-tools
	0 upgraded, 45 newly installed, 0 to remove and 0 not upgraded.
	Need to get 21.7 MB of archives.
	After this operation, 66.7 MB of additional disk space will be used.
	Get:1 http://deb.debian.org/debian buster/main arm64 readline-common all 7.0-5 [70.6 kB]
	Get:2 http://deb.debian.org/debian buster/main arm64 libapparmor1 arm64 2.13.2-10 [93.8 kB]
	Get:3 http://deb.debian.org/debian buster/main arm64 libdbus-1-3 arm64 1.12.20-0+deb10u1 [206 kB]
	Get:4 http://deb.debian.org/debian buster/main arm64 libexpat1 arm64 2.2.6-2+deb10u1 [85.4 kB]
	Get:5 http://deb.debian.org/debian buster/main arm64 dbus arm64 1.12.20-0+deb10u1 [227 kB]
	Get:6 http://deb.debian.org/debian buster/main arm64 krb5-locales all 1.17-3+deb10u1 [95.4 kB]
	Get:7 http://deb.debian.org/debian buster/main arm64 libssl1.1 arm64 1.1.1d-0+deb10u6 [1382 kB]
	Get:8 http://deb.debian.org/debian buster/main arm64 openssl arm64 1.1.1d-0+deb10u6 [823 kB]
	Get:9 http://deb.debian.org/debian buster/main arm64 ca-certificates all 20200601~deb10u2 [166 kB]
	Get:10 http://deb.debian.org/debian buster/main arm64 dmsetup arm64 2:1.02.155-3 [83.9 kB]
	Get:11 http://deb.debian.org/debian buster/main arm64 libdevmapper1.02.1 arm64 2:1.02.155-3 [124 kB]
	Get:12 http://deb.debian.org/debian buster/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.155-3 [21.7 kB]
	Get:13 http://deb.debian.org/debian buster/main arm64 libaio1 arm64 0.3.112-3 [11.1 kB]
	Get:14 http://deb.debian.org/debian buster/main arm64 liblvm2cmd2.03 arm64 2.03.02-3 [550 kB]
	Get:15 http://deb.debian.org/debian buster/main arm64 dmeventd arm64 2:1.02.155-3 [63.9 kB]
	Get:16 http://deb.debian.org/debian buster/main arm64 libavahi-common-data arm64 0.7-4+deb10u1 [122 kB]
	Get:17 http://deb.debian.org/debian buster/main arm64 libavahi-common3 arm64 0.7-4+deb10u1 [53.4 kB]
	Get:18 http://deb.debian.org/debian buster/main arm64 libavahi-client3 arm64 0.7-4+deb10u1 [56.9 kB]
	Get:19 http://deb.debian.org/debian buster/main arm64 libkeyutils1 arm64 1.6-6 [14.9 kB]
	Get:20 http://deb.debian.org/debian buster/main arm64 libkrb5support0 arm64 1.17-3+deb10u1 [64.9 kB]
	Get:21 http://deb.debian.org/debian buster/main arm64 libk5crypto3 arm64 1.17-3+deb10u1 [123 kB]
	Get:22 http://deb.debian.org/debian buster/main arm64 libkrb5-3 arm64 1.17-3+deb10u1 [351 kB]
	Get:23 http://deb.debian.org/debian buster/main arm64 libgssapi-krb5-2 arm64 1.17-3+deb10u1 [150 kB]
	Get:24 http://deb.debian.org/debian buster/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-1+deb10u1 [69.3 kB]
	Get:25 http://deb.debian.org/debian buster/main arm64 libsasl2-2 arm64 2.1.27+dfsg-1+deb10u1 [105 kB]
	Get:26 http://deb.debian.org/debian buster/main arm64 libldap-common all 2.4.47+dfsg-3+deb10u6 [90.0 kB]
	Get:27 http://deb.debian.org/debian buster/main arm64 libldap-2.4-2 arm64 2.4.47+dfsg-3+deb10u6 [216 kB]
	Get:28 http://deb.debian.org/debian buster/main arm64 libnghttp2-14 arm64 1.36.0-2+deb10u1 [81.9 kB]
	Get:29 http://deb.debian.org/debian buster/main arm64 libpsl5 arm64 0.20.2-2 [53.6 kB]
	Get:30 http://deb.debian.org/debian buster/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2 [55.7 kB]
	Get:31 http://deb.debian.org/debian buster/main arm64 libssh2-1 arm64 1.8.0-2.1 [135 kB]
	Get:32 http://deb.debian.org/debian buster/main arm64 libcurl3-gnutls arm64 7.64.0-4+deb10u2 [311 kB]
	Get:33 http://deb.debian.org/debian buster/main arm64 libicu63 arm64 63.1-6+deb10u1 [8151 kB]
	Get:34 http://deb.debian.org/debian buster/main arm64 libnl-3-200 arm64 3.4.0-1 [54.9 kB]
	Get:35 http://deb.debian.org/debian buster/main arm64 libnl-route-3-200 arm64 3.4.0-1 [134 kB]
	Get:36 http://deb.debian.org/debian buster/main arm64 libnuma1 arm64 2.0.12-1 [25.6 kB]
	Get:37 http://deb.debian.org/debian buster/main arm64 libreadline5 arm64 5.2+dfsg-3+b13 [113 kB]
	Get:38 http://deb.debian.org/debian buster/main arm64 libsasl2-modules arm64 2.1.27+dfsg-1+deb10u1 [102 kB]
	Get:39 http://deb.debian.org/debian buster/main arm64 libxml2 arm64 2.9.4+dfsg1-7+deb10u2 [625 kB]
	Get:40 http://deb.debian.org/debian buster/main arm64 libyajl2 arm64 2.1.0-3 [22.9 kB]
	Get:41 http://deb.debian.org/debian buster/main arm64 libvirt0 arm64 5.0.0-4+deb10u1 [4939 kB]
	Get:42 http://deb.debian.org/debian buster/main arm64 lsb-base all 10.2019051400 [28.4 kB]
	Get:43 http://deb.debian.org/debian buster/main arm64 lvm2 arm64 2.03.02-3 [1011 kB]
	Get:44 http://deb.debian.org/debian buster/main arm64 publicsuffix all 20190415.1030-1 [116 kB]
	Get:45 http://deb.debian.org/debian buster/main arm64 thin-provisioning-tools arm64 0.7.6-2.1 [318 kB]
	Fetched 21.7 MB in 0s (43.5 MB/s)
	Selecting previously unselected package readline-common.
	(Reading database ... 6670 files and directories currently installed.)
	Preparing to unpack .../00-readline-common_7.0-5_all.deb ...
	Unpacking readline-common (7.0-5) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../01-libapparmor1_2.13.2-10_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.13.2-10) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../02-libdbus-1-3_1.12.20-0+deb10u1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.20-0+deb10u1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../03-libexpat1_2.2.6-2+deb10u1_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.6-2+deb10u1) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../04-dbus_1.12.20-0+deb10u1_arm64.deb ...
	Unpacking dbus (1.12.20-0+deb10u1) ...
	Selecting previously unselected package krb5-locales.
	Preparing to unpack .../05-krb5-locales_1.17-3+deb10u1_all.deb ...
	Unpacking krb5-locales (1.17-3+deb10u1) ...
	Selecting previously unselected package libssl1.1:arm64.
	Preparing to unpack .../06-libssl1.1_1.1.1d-0+deb10u6_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.1d-0+deb10u6) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../07-openssl_1.1.1d-0+deb10u6_arm64.deb ...
	Unpacking openssl (1.1.1d-0+deb10u6) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../08-ca-certificates_20200601~deb10u2_all.deb ...
	Unpacking ca-certificates (20200601~deb10u2) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../09-dmsetup_2%3a1.02.155-3_arm64.deb ...
	Unpacking dmsetup (2:1.02.155-3) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../10-libdevmapper1.02.1_2%3a1.02.155-3_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.155-3) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../11-libdevmapper-event1.02.1_2%3a1.02.155-3_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.155-3) ...
	Selecting previously unselected package libaio1:arm64.
	Preparing to unpack .../12-libaio1_0.3.112-3_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-3) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../13-liblvm2cmd2.03_2.03.02-3_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.02-3) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../14-dmeventd_2%3a1.02.155-3_arm64.deb ...
	Unpacking dmeventd (2:1.02.155-3) ...
	Selecting previously unselected package libavahi-common-data:arm64.
	Preparing to unpack .../15-libavahi-common-data_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-common-data:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libavahi-common3:arm64.
	Preparing to unpack .../16-libavahi-common3_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-common3:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libavahi-client3:arm64.
	Preparing to unpack .../17-libavahi-client3_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-client3:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libkeyutils1:arm64.
	Preparing to unpack .../18-libkeyutils1_1.6-6_arm64.deb ...
	Unpacking libkeyutils1:arm64 (1.6-6) ...
	Selecting previously unselected package libkrb5support0:arm64.
	Preparing to unpack .../19-libkrb5support0_1.17-3+deb10u1_arm64.deb ...
	Unpacking libkrb5support0:arm64 (1.17-3+deb10u1) ...
	Selecting previously unselected package libk5crypto3:arm64.
	Preparing to unpack .../20-libk5crypto3_1.17-3+deb10u1_arm64.deb ...
	Unpacking libk5crypto3:arm64 (1.17-3+deb10u1) ...
	Selecting previously unselected package libkrb5-3:arm64.
	Preparing to unpack .../21-libkrb5-3_1.17-3+deb10u1_arm64.deb ...
	Unpacking libkrb5-3:arm64 (1.17-3+deb10u1) ...
	Selecting previously unselected package libgssapi-krb5-2:arm64.
	Preparing to unpack .../22-libgssapi-krb5-2_1.17-3+deb10u1_arm64.deb ...
	Unpacking libgssapi-krb5-2:arm64 (1.17-3+deb10u1) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../23-libsasl2-modules-db_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../24-libsasl2-2_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../25-libldap-common_2.4.47+dfsg-3+deb10u6_all.deb ...
	Unpacking libldap-common (2.4.47+dfsg-3+deb10u6) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../26-libldap-2.4-2_2.4.47+dfsg-3+deb10u6_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.47+dfsg-3+deb10u6) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../27-libnghttp2-14_1.36.0-2+deb10u1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.36.0-2+deb10u1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../28-libpsl5_0.20.2-2_arm64.deb ...
	Unpacking libpsl5:arm64 (0.20.2-2) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../29-librtmp1_2.4+20151223.gitfa8646d.1-2_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2) ...
	Selecting previously unselected package libssh2-1:arm64.
	Preparing to unpack .../30-libssh2-1_1.8.0-2.1_arm64.deb ...
	Unpacking libssh2-1:arm64 (1.8.0-2.1) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../31-libcurl3-gnutls_7.64.0-4+deb10u2_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.64.0-4+deb10u2) ...
	Selecting previously unselected package libicu63:arm64.
	Preparing to unpack .../32-libicu63_63.1-6+deb10u1_arm64.deb ...
	Unpacking libicu63:arm64 (63.1-6+deb10u1) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../33-libnl-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libnl-route-3-200:arm64.
	Preparing to unpack .../34-libnl-route-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-route-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../35-libnuma1_2.0.12-1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../36-libreadline5_5.2+dfsg-3+b13_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3+b13) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../37-libsasl2-modules_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../38-libxml2_2.9.4+dfsg1-7+deb10u2_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.4+dfsg1-7+deb10u2) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../39-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../40-libvirt0_5.0.0-4+deb10u1_arm64.deb ...
	Unpacking libvirt0:arm64 (5.0.0-4+deb10u1) ...
	Selecting previously unselected package lsb-base.
	Preparing to unpack .../41-lsb-base_10.2019051400_all.deb ...
	Unpacking lsb-base (10.2019051400) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../42-lvm2_2.03.02-3_arm64.deb ...
	Unpacking lvm2 (2.03.02-3) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../43-publicsuffix_20190415.1030-1_all.deb ...
	Unpacking publicsuffix (20190415.1030-1) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../44-thin-provisioning-tools_0.7.6-2.1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.7.6-2.1) ...
	Setting up libexpat1:arm64 (2.2.6-2+deb10u1) ...
	Setting up lsb-base (10.2019051400) ...
	Setting up libkeyutils1:arm64 (1.6-6) ...
	Setting up libapparmor1:arm64 (2.13.2-10) ...
	Setting up libpsl5:arm64 (0.20.2-2) ...
	Setting up libssl1.1:arm64 (1.1.1d-0+deb10u6) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/aarch64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.36.0-2+deb10u1) ...
	Setting up krb5-locales (1.17-3+deb10u1) ...
	Setting up libldap-common (2.4.47+dfsg-3+deb10u6) ...
	Setting up libicu63:arm64 (63.1-6+deb10u1) ...
	Setting up libkrb5support0:arm64 (1.17-3+deb10u1) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2) ...
	Setting up libavahi-common-data:arm64 (0.7-4+deb10u1) ...
	Setting up libdbus-1-3:arm64 (1.12.20-0+deb10u1) ...
	Setting up dbus (1.12.20-0+deb10u1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Setting up libk5crypto3:arm64 (1.17-3+deb10u1) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up libnuma1:arm64 (2.0.12-1) ...
	Setting up libnl-3-200:arm64 (3.4.0-1) ...
	Setting up libssh2-1:arm64 (1.8.0-2.1) ...
	Setting up libkrb5-3:arm64 (1.17-3+deb10u1) ...
	Setting up libaio1:arm64 (0.3.112-3) ...
	Setting up openssl (1.1.1d-0+deb10u6) ...
	Setting up readline-common (7.0-5) ...
	Setting up publicsuffix (20190415.1030-1) ...
	Setting up libxml2:arm64 (2.9.4+dfsg1-7+deb10u2) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3+b13) ...
	Setting up libavahi-common3:arm64 (0.7-4+deb10u1) ...
	Setting up libldap-2.4-2:arm64 (2.4.47+dfsg-3+deb10u6) ...
	Setting up libnl-route-3-200:arm64 (3.4.0-1) ...
	Setting up ca-certificates (20200601~deb10u2) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/aarch64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	137 added, 0 removed; done.
	Setting up thin-provisioning-tools (0.7.6-2.1) ...
	Setting up libgssapi-krb5-2:arm64 (1.17-3+deb10u1) ...
	Setting up libavahi-client3:arm64 (0.7-4+deb10u1) ...
	Setting up libcurl3-gnutls:arm64 (7.64.0-4+deb10u2) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.155-3) ...
	Setting up libvirt0:arm64 (5.0.0-4+deb10u1) ...
	Setting up dmsetup (2:1.02.155-3) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.155-3) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.02-3) ...
	Setting up dmeventd (2:1.02.155-3) ...
	Setting up lvm2 (2.03.02-3) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.28-10) ...
	Processing triggers for ca-certificates (20200601~deb10u2) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

-- /stdout --
** stderr ** 
	Unable to find image 'debian:latest' locally
	latest: Pulling from library/debian
	310b368da982: Pulling fs layer
	310b368da982: Verifying Checksum
	310b368da982: Download complete
	310b368da982: Pull complete
	Digest: sha256:33a8231b1ec668c044b583971eea94fff37151de3a1d5a3737b08665300c8a0b
	Status: Downloaded newer image for debian:latest
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb (--install):
	 package architecture (aarch64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb

** /stderr **
pkg_install_test.go:87: failed to install "/home/jenkins/workspace/Docker_Linux_crio_arm64/out/docker-machine-driver-kvm2_1.22.0-0_arm64.deb" on "debian:latest": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver (13.42s)
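The filename already uses Debian's "arm64" name, so the defect is most likely confined to the control file's Architecture field (plausibly taken from uname -m at build time). A hypothetical repack sketch — an assumption about a fix, not a confirmed minikube change — rewrites that single field with standard dpkg-deb tooling:

	# Hypothetical fix sketch: unpack, correct the Architecture field, rebuild.
	DEB=/home/jenkins/workspace/Docker_Linux_crio_arm64/out/docker-machine-driver-kvm2_1.22.0-0_arm64.deb
	tmp=$(mktemp -d)
	dpkg-deb -R "$DEB" "$tmp"                    # raw-extract payload plus DEBIAN/control
	sed -i 's/^Architecture: aarch64$/Architecture: arm64/' "$tmp/DEBIAN/control"
	dpkg-deb -b "$tmp" "${DEB%.deb}_fixed.deb"   # rebuild with the corrected field

The durable fix would be to emit "arm64" when the control file is generated rather than repacking after the fact.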

TestDebPackageInstall/install_arm64_debian:10/kvm2-driver (9.91s)

=== RUN   TestDebPackageInstall/install_arm64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": exit status 1 (9.901446s)

-- stdout --
	Get:1 http://deb.debian.org/debian buster InRelease [122 kB]
	Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
	Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
	Get:4 http://security.debian.org/debian-security buster/updates/main arm64 Packages [288 kB]
	Get:5 http://deb.debian.org/debian buster/main arm64 Packages [7735 kB]
	Get:6 http://deb.debian.org/debian buster-updates/main arm64 Packages [14.5 kB]
	Fetched 8277 kB in 1s (6243 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libavahi-client3 libavahi-common-data libavahi-common3 libcurl3-gnutls
	  libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1
	  libgssapi-krb5-2 libicu63 libk5crypto3 libkeyutils1 libkrb5-3
	  libkrb5support0 libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14
	  libnl-3-200 libnl-route-3-200 libnuma1 libpsl5 libreadline5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.1 libxml2
	  libyajl2 lsb-base lvm2 openssl publicsuffix readline-common
	  thin-provisioning-tools
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus krb5-doc krb5-user
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libavahi-client3 libavahi-common-data libavahi-common3 libcurl3-gnutls
	  libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1
	  libgssapi-krb5-2 libicu63 libk5crypto3 libkeyutils1 libkrb5-3
	  libkrb5support0 libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14
	  libnl-3-200 libnl-route-3-200 libnuma1 libpsl5 libreadline5 librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 libssl1.1 libvirt0
	  libxml2 libyajl2 lsb-base lvm2 openssl publicsuffix readline-common
	  thin-provisioning-tools
	0 upgraded, 45 newly installed, 0 to remove and 0 not upgraded.
	Need to get 21.7 MB of archives.
	After this operation, 66.7 MB of additional disk space will be used.
	Get:1 http://deb.debian.org/debian buster/main arm64 readline-common all 7.0-5 [70.6 kB]
	Get:2 http://deb.debian.org/debian buster/main arm64 libapparmor1 arm64 2.13.2-10 [93.8 kB]
	Get:3 http://deb.debian.org/debian buster/main arm64 libdbus-1-3 arm64 1.12.20-0+deb10u1 [206 kB]
	Get:4 http://deb.debian.org/debian buster/main arm64 libexpat1 arm64 2.2.6-2+deb10u1 [85.4 kB]
	Get:5 http://deb.debian.org/debian buster/main arm64 dbus arm64 1.12.20-0+deb10u1 [227 kB]
	Get:6 http://deb.debian.org/debian buster/main arm64 krb5-locales all 1.17-3+deb10u1 [95.4 kB]
	Get:7 http://deb.debian.org/debian buster/main arm64 libssl1.1 arm64 1.1.1d-0+deb10u6 [1382 kB]
	Get:8 http://deb.debian.org/debian buster/main arm64 openssl arm64 1.1.1d-0+deb10u6 [823 kB]
	Get:9 http://deb.debian.org/debian buster/main arm64 ca-certificates all 20200601~deb10u2 [166 kB]
	Get:10 http://deb.debian.org/debian buster/main arm64 dmsetup arm64 2:1.02.155-3 [83.9 kB]
	Get:11 http://deb.debian.org/debian buster/main arm64 libdevmapper1.02.1 arm64 2:1.02.155-3 [124 kB]
	Get:12 http://deb.debian.org/debian buster/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.155-3 [21.7 kB]
	Get:13 http://deb.debian.org/debian buster/main arm64 libaio1 arm64 0.3.112-3 [11.1 kB]
	Get:14 http://deb.debian.org/debian buster/main arm64 liblvm2cmd2.03 arm64 2.03.02-3 [550 kB]
	Get:15 http://deb.debian.org/debian buster/main arm64 dmeventd arm64 2:1.02.155-3 [63.9 kB]
	Get:16 http://deb.debian.org/debian buster/main arm64 libavahi-common-data arm64 0.7-4+deb10u1 [122 kB]
	Get:17 http://deb.debian.org/debian buster/main arm64 libavahi-common3 arm64 0.7-4+deb10u1 [53.4 kB]
	Get:18 http://deb.debian.org/debian buster/main arm64 libavahi-client3 arm64 0.7-4+deb10u1 [56.9 kB]
	Get:19 http://deb.debian.org/debian buster/main arm64 libkeyutils1 arm64 1.6-6 [14.9 kB]
	Get:20 http://deb.debian.org/debian buster/main arm64 libkrb5support0 arm64 1.17-3+deb10u1 [64.9 kB]
	Get:21 http://deb.debian.org/debian buster/main arm64 libk5crypto3 arm64 1.17-3+deb10u1 [123 kB]
	Get:22 http://deb.debian.org/debian buster/main arm64 libkrb5-3 arm64 1.17-3+deb10u1 [351 kB]
	Get:23 http://deb.debian.org/debian buster/main arm64 libgssapi-krb5-2 arm64 1.17-3+deb10u1 [150 kB]
	Get:24 http://deb.debian.org/debian buster/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-1+deb10u1 [69.3 kB]
	Get:25 http://deb.debian.org/debian buster/main arm64 libsasl2-2 arm64 2.1.27+dfsg-1+deb10u1 [105 kB]
	Get:26 http://deb.debian.org/debian buster/main arm64 libldap-common all 2.4.47+dfsg-3+deb10u6 [90.0 kB]
	Get:27 http://deb.debian.org/debian buster/main arm64 libldap-2.4-2 arm64 2.4.47+dfsg-3+deb10u6 [216 kB]
	Get:28 http://deb.debian.org/debian buster/main arm64 libnghttp2-14 arm64 1.36.0-2+deb10u1 [81.9 kB]
	Get:29 http://deb.debian.org/debian buster/main arm64 libpsl5 arm64 0.20.2-2 [53.6 kB]
	Get:30 http://deb.debian.org/debian buster/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2 [55.7 kB]
	Get:31 http://deb.debian.org/debian buster/main arm64 libssh2-1 arm64 1.8.0-2.1 [135 kB]
	Get:32 http://deb.debian.org/debian buster/main arm64 libcurl3-gnutls arm64 7.64.0-4+deb10u2 [311 kB]
	Get:33 http://deb.debian.org/debian buster/main arm64 libicu63 arm64 63.1-6+deb10u1 [8151 kB]
	Get:34 http://deb.debian.org/debian buster/main arm64 libnl-3-200 arm64 3.4.0-1 [54.9 kB]
	Get:35 http://deb.debian.org/debian buster/main arm64 libnl-route-3-200 arm64 3.4.0-1 [134 kB]
	Get:36 http://deb.debian.org/debian buster/main arm64 libnuma1 arm64 2.0.12-1 [25.6 kB]
	Get:37 http://deb.debian.org/debian buster/main arm64 libreadline5 arm64 5.2+dfsg-3+b13 [113 kB]
	Get:38 http://deb.debian.org/debian buster/main arm64 libsasl2-modules arm64 2.1.27+dfsg-1+deb10u1 [102 kB]
	Get:39 http://deb.debian.org/debian buster/main arm64 libxml2 arm64 2.9.4+dfsg1-7+deb10u2 [625 kB]
	Get:40 http://deb.debian.org/debian buster/main arm64 libyajl2 arm64 2.1.0-3 [22.9 kB]
	Get:41 http://deb.debian.org/debian buster/main arm64 libvirt0 arm64 5.0.0-4+deb10u1 [4939 kB]
	Get:42 http://deb.debian.org/debian buster/main arm64 lsb-base all 10.2019051400 [28.4 kB]
	Get:43 http://deb.debian.org/debian buster/main arm64 lvm2 arm64 2.03.02-3 [1011 kB]
	Get:44 http://deb.debian.org/debian buster/main arm64 publicsuffix all 20190415.1030-1 [116 kB]
	Get:45 http://deb.debian.org/debian buster/main arm64 thin-provisioning-tools arm64 0.7.6-2.1 [318 kB]
	Fetched 21.7 MB in 0s (55.7 MB/s)
	Selecting previously unselected package readline-common.
	(Reading database ... 6670 files and directories currently installed.)
	Preparing to unpack .../00-readline-common_7.0-5_all.deb ...
	Unpacking readline-common (7.0-5) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../01-libapparmor1_2.13.2-10_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.13.2-10) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../02-libdbus-1-3_1.12.20-0+deb10u1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.20-0+deb10u1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../03-libexpat1_2.2.6-2+deb10u1_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.6-2+deb10u1) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../04-dbus_1.12.20-0+deb10u1_arm64.deb ...
	Unpacking dbus (1.12.20-0+deb10u1) ...
	Selecting previously unselected package krb5-locales.
	Preparing to unpack .../05-krb5-locales_1.17-3+deb10u1_all.deb ...
	Unpacking krb5-locales (1.17-3+deb10u1) ...
	Selecting previously unselected package libssl1.1:arm64.
	Preparing to unpack .../06-libssl1.1_1.1.1d-0+deb10u6_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.1d-0+deb10u6) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../07-openssl_1.1.1d-0+deb10u6_arm64.deb ...
	Unpacking openssl (1.1.1d-0+deb10u6) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../08-ca-certificates_20200601~deb10u2_all.deb ...
	Unpacking ca-certificates (20200601~deb10u2) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../09-dmsetup_2%3a1.02.155-3_arm64.deb ...
	Unpacking dmsetup (2:1.02.155-3) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../10-libdevmapper1.02.1_2%3a1.02.155-3_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.155-3) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../11-libdevmapper-event1.02.1_2%3a1.02.155-3_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.155-3) ...
	Selecting previously unselected package libaio1:arm64.
	Preparing to unpack .../12-libaio1_0.3.112-3_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-3) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../13-liblvm2cmd2.03_2.03.02-3_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.02-3) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../14-dmeventd_2%3a1.02.155-3_arm64.deb ...
	Unpacking dmeventd (2:1.02.155-3) ...
	Selecting previously unselected package libavahi-common-data:arm64.
	Preparing to unpack .../15-libavahi-common-data_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-common-data:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libavahi-common3:arm64.
	Preparing to unpack .../16-libavahi-common3_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-common3:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libavahi-client3:arm64.
	Preparing to unpack .../17-libavahi-client3_0.7-4+deb10u1_arm64.deb ...
	Unpacking libavahi-client3:arm64 (0.7-4+deb10u1) ...
	Selecting previously unselected package libkeyutils1:arm64.
	Preparing to unpack .../18-libkeyutils1_1.6-6_arm64.deb ...
	Unpacking libkeyutils1:arm64 (1.6-6) ...
	Selecting previously unselected package libkrb5support0:arm64.
	Preparing to unpack .../19-libkrb5support0_1.17-3+deb10u1_arm64.deb ...
	Unpacking libkrb5support0:arm64 (1.17-3+deb10u1) ...
	Selecting previously unselected package libk5crypto3:arm64.
	Preparing to unpack .../20-libk5crypto3_1.17-3+deb10u1_arm64.deb ...
	Unpacking libk5crypto3:arm64 (1.17-3+deb10u1) ...
	Selecting previously unselected package libkrb5-3:arm64.
	Preparing to unpack .../21-libkrb5-3_1.17-3+deb10u1_arm64.deb ...
	Unpacking libkrb5-3:arm64 (1.17-3+deb10u1) ...
	Selecting previously unselected package libgssapi-krb5-2:arm64.
	Preparing to unpack .../22-libgssapi-krb5-2_1.17-3+deb10u1_arm64.deb ...
	Unpacking libgssapi-krb5-2:arm64 (1.17-3+deb10u1) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../23-libsasl2-modules-db_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../24-libsasl2-2_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../25-libldap-common_2.4.47+dfsg-3+deb10u6_all.deb ...
	Unpacking libldap-common (2.4.47+dfsg-3+deb10u6) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../26-libldap-2.4-2_2.4.47+dfsg-3+deb10u6_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.47+dfsg-3+deb10u6) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../27-libnghttp2-14_1.36.0-2+deb10u1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.36.0-2+deb10u1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../28-libpsl5_0.20.2-2_arm64.deb ...
	Unpacking libpsl5:arm64 (0.20.2-2) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../29-librtmp1_2.4+20151223.gitfa8646d.1-2_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2) ...
	Selecting previously unselected package libssh2-1:arm64.
	Preparing to unpack .../30-libssh2-1_1.8.0-2.1_arm64.deb ...
	Unpacking libssh2-1:arm64 (1.8.0-2.1) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../31-libcurl3-gnutls_7.64.0-4+deb10u2_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.64.0-4+deb10u2) ...
	Selecting previously unselected package libicu63:arm64.
	Preparing to unpack .../32-libicu63_63.1-6+deb10u1_arm64.deb ...
	Unpacking libicu63:arm64 (63.1-6+deb10u1) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../33-libnl-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libnl-route-3-200:arm64.
	Preparing to unpack .../34-libnl-route-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-route-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../35-libnuma1_2.0.12-1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../36-libreadline5_5.2+dfsg-3+b13_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3+b13) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../37-libsasl2-modules_2.1.27+dfsg-1+deb10u1_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../38-libxml2_2.9.4+dfsg1-7+deb10u2_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.4+dfsg1-7+deb10u2) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../39-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../40-libvirt0_5.0.0-4+deb10u1_arm64.deb ...
	Unpacking libvirt0:arm64 (5.0.0-4+deb10u1) ...
	Selecting previously unselected package lsb-base.
	Preparing to unpack .../41-lsb-base_10.2019051400_all.deb ...
	Unpacking lsb-base (10.2019051400) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../42-lvm2_2.03.02-3_arm64.deb ...
	Unpacking lvm2 (2.03.02-3) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../43-publicsuffix_20190415.1030-1_all.deb ...
	Unpacking publicsuffix (20190415.1030-1) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../44-thin-provisioning-tools_0.7.6-2.1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.7.6-2.1) ...
	Setting up libexpat1:arm64 (2.2.6-2+deb10u1) ...
	Setting up lsb-base (10.2019051400) ...
	Setting up libkeyutils1:arm64 (1.6-6) ...
	Setting up libapparmor1:arm64 (2.13.2-10) ...
	Setting up libpsl5:arm64 (0.20.2-2) ...
	Setting up libssl1.1:arm64 (1.1.1d-0+deb10u6) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/aarch64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.36.0-2+deb10u1) ...
	Setting up krb5-locales (1.17-3+deb10u1) ...
	Setting up libldap-common (2.4.47+dfsg-3+deb10u6) ...
	Setting up libicu63:arm64 (63.1-6+deb10u1) ...
	Setting up libkrb5support0:arm64 (1.17-3+deb10u1) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2) ...
	Setting up libavahi-common-data:arm64 (0.7-4+deb10u1) ...
	Setting up libdbus-1-3:arm64 (1.12.20-0+deb10u1) ...
	Setting up dbus (1.12.20-0+deb10u1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Setting up libk5crypto3:arm64 (1.17-3+deb10u1) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-1+deb10u1) ...
	Setting up libnuma1:arm64 (2.0.12-1) ...
	Setting up libnl-3-200:arm64 (3.4.0-1) ...
	Setting up libssh2-1:arm64 (1.8.0-2.1) ...
	Setting up libkrb5-3:arm64 (1.17-3+deb10u1) ...
	Setting up libaio1:arm64 (0.3.112-3) ...
	Setting up openssl (1.1.1d-0+deb10u6) ...
	Setting up readline-common (7.0-5) ...
	Setting up publicsuffix (20190415.1030-1) ...
	Setting up libxml2:arm64 (2.9.4+dfsg1-7+deb10u2) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3+b13) ...
	Setting up libavahi-common3:arm64 (0.7-4+deb10u1) ...
	Setting up libldap-2.4-2:arm64 (2.4.47+dfsg-3+deb10u6) ...
	Setting up libnl-route-3-200:arm64 (3.4.0-1) ...
	Setting up ca-certificates (20200601~deb10u2) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.28.1 /usr/local/share/perl/5.28.1 /usr/lib/aarch64-linux-gnu/perl5/5.28 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.28 /usr/share/perl/5.28 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	137 added, 0 removed; done.
	Setting up thin-provisioning-tools (0.7.6-2.1) ...
	Setting up libgssapi-krb5-2:arm64 (1.17-3+deb10u1) ...
	Setting up libavahi-client3:arm64 (0.7-4+deb10u1) ...
	Setting up libcurl3-gnutls:arm64 (7.64.0-4+deb10u2) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.155-3) ...
	Setting up libvirt0:arm64 (5.0.0-4+deb10u1) ...
	Setting up dmsetup (2:1.02.155-3) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.155-3) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.02-3) ...
	Setting up dmeventd (2:1.02.155-3) ...
	Setting up lvm2 (2.03.02-3) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.28-10) ...
	Processing triggers for ca-certificates (20200601~deb10u2) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

-- /stdout --
** stderr ** 
	Unable to find image 'debian:10' locally
	10: Pulling from library/debian
	Digest: sha256:33a8231b1ec668c044b583971eea94fff37151de3a1d5a3737b08665300c8a0b
	Status: Downloaded newer image for debian:10
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb (--install):
	 package architecture (aarch64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb

** /stderr **
pkg_install_test.go:87: failed to install "/home/jenkins/workspace/Docker_Linux_crio_arm64/out/docker-machine-driver-kvm2_1.22.0-0_arm64.deb" on "debian:10": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_arm64_debian:10/kvm2-driver (9.91s)
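The rejection is byte-for-byte identical on debian:sid, debian:latest, debian:10, and debian:9 below, so this is deterministic package metadata, not a flaky environment; retrying the job will not change the outcome. A one-line pre-publish guard the build could run (hypothetical, not part of the current harness):

	# Hypothetical pre-publish check: fail fast if the control field is wrong.
	test "$(dpkg --field out/docker-machine-driver-kvm2_1.22.0-0_arm64.deb Architecture)" = arm64 \
	  || { echo 'deb must declare Architecture: arm64 (not aarch64)' >&2; exit 1; }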

TestDebPackageInstall/install_arm64_debian:9/kvm2-driver (11.91s)

=== RUN   TestDebPackageInstall/install_arm64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": exit status 1 (11.908151218s)

-- stdout --
	Ign:1 http://deb.debian.org/debian stretch InRelease
	Get:2 http://security.debian.org/debian-security stretch/updates InRelease [53.0 kB]
	Get:3 http://deb.debian.org/debian stretch-updates InRelease [93.6 kB]
	Get:4 http://deb.debian.org/debian stretch Release [118 kB]
	Get:5 http://deb.debian.org/debian stretch Release.gpg [2410 B]
	Get:6 http://security.debian.org/debian-security stretch/updates/main arm64 Packages [678 kB]
	Get:7 http://deb.debian.org/debian stretch/main arm64 Packages [6921 kB]
	Fetched 7866 kB in 1s (5893 kB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  dbus dmeventd dmsetup libapparmor1 libavahi-client3 libavahi-common-data
	  libavahi-common3 libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1
	  libexpat1 libfdt1 libffi6 libgmp10 libgnutls30 libhogweed4 libicu57
	  liblvm2app2.2 liblvm2cmd2.02 libnl-3-200 libnl-route-3-200 libnuma1
	  libp11-kit0 libreadline5 libsasl2-2 libsasl2-modules libsasl2-modules-db
	  libssh2-1 libssl1.1 libtasn1-6 libxen-4.8 libxenstore3.0 libxml2 libyajl2
	  lvm2 readline-common sgml-base xml-core
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus gnutls-bin
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql
	  thin-provisioning-tools readline-doc sgml-base-doc debhelper
	The following NEW packages will be installed:
	  dbus dmeventd dmsetup libapparmor1 libavahi-client3 libavahi-common-data
	  libavahi-common3 libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1
	  libexpat1 libfdt1 libffi6 libgmp10 libgnutls30 libhogweed4 libicu57
	  liblvm2app2.2 liblvm2cmd2.02 libnl-3-200 libnl-route-3-200 libnuma1
	  libp11-kit0 libreadline5 libsasl2-2 libsasl2-modules libsasl2-modules-db
	  libssh2-1 libssl1.1 libtasn1-6 libvirt0 libxen-4.8 libxenstore3.0 libxml2
	  libyajl2 lvm2 readline-common sgml-base xml-core
	0 upgraded, 39 newly installed, 0 to remove and 1 not upgraded.
	Need to get 18.7 MB of archives.
	After this operation, 57.6 MB of additional disk space will be used.
	Get:1 http://deb.debian.org/debian stretch/main arm64 sgml-base all 1.29 [14.8 kB]
	Get:2 http://security.debian.org/debian-security stretch/updates/main arm64 libssl1.1 arm64 1.1.0l-1~deb9u3 [1125 kB]
	Get:3 http://deb.debian.org/debian stretch/main arm64 readline-common all 7.0-3 [70.4 kB]
	Get:4 http://deb.debian.org/debian stretch/main arm64 libapparmor1 arm64 2.11.0-3+deb9u2 [75.7 kB]
	Get:5 http://deb.debian.org/debian stretch/main arm64 libdbus-1-3 arm64 1.10.32-0+deb9u1 [172 kB]
	Get:6 http://deb.debian.org/debian stretch/main arm64 libexpat1 arm64 2.2.0-2+deb9u3 [70.9 kB]
	Get:7 http://deb.debian.org/debian stretch/main arm64 dbus arm64 1.10.32-0+deb9u1 [194 kB]
	Get:8 http://deb.debian.org/debian stretch/main arm64 libgmp10 arm64 2:6.1.2+dfsg-1 [213 kB]
	Get:9 http://deb.debian.org/debian stretch/main arm64 libhogweed4 arm64 3.3-1+b2 [128 kB]
	Get:10 http://deb.debian.org/debian stretch/main arm64 libffi6 arm64 3.2.1-6 [19.0 kB]
	Get:11 http://deb.debian.org/debian stretch/main arm64 libtasn1-6 arm64 4.10-1.1+deb9u1 [45.7 kB]
	Get:12 http://deb.debian.org/debian stretch/main arm64 libgnutls30 arm64 3.5.8-5+deb9u5 [784 kB]
	Get:13 http://deb.debian.org/debian stretch/main arm64 libsasl2-modules-db arm64 2.1.27~101-g0780600+dfsg-3+deb9u1 [66.8 kB]
	Get:14 http://deb.debian.org/debian stretch/main arm64 libsasl2-2 arm64 2.1.27~101-g0780600+dfsg-3+deb9u1 [97.8 kB]
	Get:15 http://deb.debian.org/debian stretch/main arm64 libicu57 arm64 57.1-6+deb9u4 [7553 kB]
	Get:16 http://security.debian.org/debian-security stretch/updates/main arm64 libp11-kit0 arm64 0.23.3-2+deb9u1 [91.4 kB]
	Get:17 http://security.debian.org/debian-security stretch/updates/main arm64 libxml2 arm64 2.9.4+dfsg1-2.2+deb9u5 [790 kB]
	Get:18 http://deb.debian.org/debian stretch/main arm64 dmsetup arm64 2:1.02.137-2 [100 kB]
	Get:19 http://deb.debian.org/debian stretch/main arm64 libdevmapper1.02.1 arm64 2:1.02.137-2 [143 kB]
	Get:20 http://deb.debian.org/debian stretch/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.137-2 [40.1 kB]
	Get:21 http://deb.debian.org/debian stretch/main arm64 liblvm2cmd2.02 arm64 2.02.168-2 [566 kB]
	Get:22 http://deb.debian.org/debian stretch/main arm64 dmeventd arm64 2:1.02.137-2 [56.5 kB]
	Get:23 http://deb.debian.org/debian stretch/main arm64 libavahi-common-data arm64 0.6.32-2 [118 kB]
	Get:24 http://deb.debian.org/debian stretch/main arm64 libavahi-common3 arm64 0.6.32-2 [48.4 kB]
	Get:25 http://deb.debian.org/debian stretch/main arm64 libavahi-client3 arm64 0.6.32-2 [51.2 kB]
	Get:26 http://deb.debian.org/debian stretch/main arm64 liblvm2app2.2 arm64 2.02.168-2 [458 kB]
	Get:27 http://deb.debian.org/debian stretch/main arm64 libnl-3-200 arm64 3.2.27-2 [52.5 kB]
	Get:28 http://deb.debian.org/debian stretch/main arm64 libnl-route-3-200 arm64 3.2.27-2 [111 kB]
	Get:29 http://deb.debian.org/debian stretch/main arm64 libnuma1 arm64 2.0.11-2.1 [30.1 kB]
	Get:30 http://deb.debian.org/debian stretch/main arm64 libreadline5 arm64 5.2+dfsg-3+b1 [101 kB]
	Get:31 http://deb.debian.org/debian stretch/main arm64 libsasl2-modules arm64 2.1.27~101-g0780600+dfsg-3+deb9u1 [94.8 kB]
	Get:32 http://deb.debian.org/debian stretch/main arm64 libssh2-1 arm64 1.7.0-1+deb9u1 [127 kB]
	Get:33 http://security.debian.org/debian-security stretch/updates/main arm64 libvirt0 arm64 3.0.0-4+deb9u5 [3913 kB]
	Get:34 http://deb.debian.org/debian stretch/main arm64 libfdt1 arm64 1.4.2-1 [12.8 kB]
	Get:35 http://deb.debian.org/debian stretch/main arm64 libxenstore3.0 arm64 4.8.5.final+shim4.10.4-1+deb9u12 [33.8 kB]
	Get:36 http://deb.debian.org/debian stretch/main arm64 libyajl2 arm64 2.1.0-2+b3 [20.7 kB]
	Get:37 http://deb.debian.org/debian stretch/main arm64 libxen-4.8 arm64 4.8.5.final+shim4.10.4-1+deb9u12 [298 kB]
	Get:38 http://deb.debian.org/debian stretch/main arm64 lvm2 arm64 2.02.168-2 [813 kB]
	Get:39 http://deb.debian.org/debian stretch/main arm64 xml-core all 0.17 [23.2 kB]
	Fetched 18.7 MB in 0s (46.0 MB/s)
	Selecting previously unselected package sgml-base.
	(Reading database ... 6495 files and directories currently installed.)
	Preparing to unpack .../00-sgml-base_1.29_all.deb ...
	Unpacking sgml-base (1.29) ...
	Selecting previously unselected package libssl1.1:arm64.
	Preparing to unpack .../01-libssl1.1_1.1.0l-1~deb9u3_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.0l-1~deb9u3) ...
	Selecting previously unselected package readline-common.
	Preparing to unpack .../02-readline-common_7.0-3_all.deb ...
	Unpacking readline-common (7.0-3) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../03-libapparmor1_2.11.0-3+deb9u2_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.11.0-3+deb9u2) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../04-libdbus-1-3_1.10.32-0+deb9u1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.10.32-0+deb9u1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../05-libexpat1_2.2.0-2+deb9u3_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.0-2+deb9u3) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../06-dbus_1.10.32-0+deb9u1_arm64.deb ...
	Unpacking dbus (1.10.32-0+deb9u1) ...
	Selecting previously unselected package libgmp10:arm64.
	Preparing to unpack .../07-libgmp10_2%3a6.1.2+dfsg-1_arm64.deb ...
	Unpacking libgmp10:arm64 (2:6.1.2+dfsg-1) ...
	Selecting previously unselected package libhogweed4:arm64.
	Preparing to unpack .../08-libhogweed4_3.3-1+b2_arm64.deb ...
	Unpacking libhogweed4:arm64 (3.3-1+b2) ...
	Selecting previously unselected package libffi6:arm64.
	Preparing to unpack .../09-libffi6_3.2.1-6_arm64.deb ...
	Unpacking libffi6:arm64 (3.2.1-6) ...
	Selecting previously unselected package libp11-kit0:arm64.
	Preparing to unpack .../10-libp11-kit0_0.23.3-2+deb9u1_arm64.deb ...
	Unpacking libp11-kit0:arm64 (0.23.3-2+deb9u1) ...
	Selecting previously unselected package libtasn1-6:arm64.
	Preparing to unpack .../11-libtasn1-6_4.10-1.1+deb9u1_arm64.deb ...
	Unpacking libtasn1-6:arm64 (4.10-1.1+deb9u1) ...
	Selecting previously unselected package libgnutls30:arm64.
	Preparing to unpack .../12-libgnutls30_3.5.8-5+deb9u5_arm64.deb ...
	Unpacking libgnutls30:arm64 (3.5.8-5+deb9u5) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../13-libsasl2-modules-db_2.1.27~101-g0780600+dfsg-3+deb9u1_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../14-libsasl2-2_2.1.27~101-g0780600+dfsg-3+deb9u1_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Selecting previously unselected package libicu57:arm64.
	Preparing to unpack .../15-libicu57_57.1-6+deb9u4_arm64.deb ...
	Unpacking libicu57:arm64 (57.1-6+deb9u4) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../16-libxml2_2.9.4+dfsg1-2.2+deb9u5_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.4+dfsg1-2.2+deb9u5) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../17-dmsetup_2%3a1.02.137-2_arm64.deb ...
	Unpacking dmsetup (2:1.02.137-2) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../18-libdevmapper1.02.1_2%3a1.02.137-2_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.137-2) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../19-libdevmapper-event1.02.1_2%3a1.02.137-2_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.137-2) ...
	Selecting previously unselected package liblvm2cmd2.02:arm64.
	Preparing to unpack .../20-liblvm2cmd2.02_2.02.168-2_arm64.deb ...
	Unpacking liblvm2cmd2.02:arm64 (2.02.168-2) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../21-dmeventd_2%3a1.02.137-2_arm64.deb ...
	Unpacking dmeventd (2:1.02.137-2) ...
	Selecting previously unselected package libavahi-common-data:arm64.
	Preparing to unpack .../22-libavahi-common-data_0.6.32-2_arm64.deb ...
	Unpacking libavahi-common-data:arm64 (0.6.32-2) ...
	Selecting previously unselected package libavahi-common3:arm64.
	Preparing to unpack .../23-libavahi-common3_0.6.32-2_arm64.deb ...
	Unpacking libavahi-common3:arm64 (0.6.32-2) ...
	Selecting previously unselected package libavahi-client3:arm64.
	Preparing to unpack .../24-libavahi-client3_0.6.32-2_arm64.deb ...
	Unpacking libavahi-client3:arm64 (0.6.32-2) ...
	Selecting previously unselected package liblvm2app2.2:arm64.
	Preparing to unpack .../25-liblvm2app2.2_2.02.168-2_arm64.deb ...
	Unpacking liblvm2app2.2:arm64 (2.02.168-2) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../26-libnl-3-200_3.2.27-2_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.2.27-2) ...
	Selecting previously unselected package libnl-route-3-200:arm64.
	Preparing to unpack .../27-libnl-route-3-200_3.2.27-2_arm64.deb ...
	Unpacking libnl-route-3-200:arm64 (3.2.27-2) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../28-libnuma1_2.0.11-2.1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.11-2.1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../29-libreadline5_5.2+dfsg-3+b1_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3+b1) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../30-libsasl2-modules_2.1.27~101-g0780600+dfsg-3+deb9u1_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Selecting previously unselected package libssh2-1:arm64.
	Preparing to unpack .../31-libssh2-1_1.7.0-1+deb9u1_arm64.deb ...
	Unpacking libssh2-1:arm64 (1.7.0-1+deb9u1) ...
	Selecting previously unselected package libfdt1:arm64.
	Preparing to unpack .../32-libfdt1_1.4.2-1_arm64.deb ...
	Unpacking libfdt1:arm64 (1.4.2-1) ...
	Selecting previously unselected package libxenstore3.0:arm64.
	Preparing to unpack .../33-libxenstore3.0_4.8.5.final+shim4.10.4-1+deb9u12_arm64.deb ...
	Unpacking libxenstore3.0:arm64 (4.8.5.final+shim4.10.4-1+deb9u12) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../34-libyajl2_2.1.0-2+b3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-2+b3) ...
	Selecting previously unselected package libxen-4.8:arm64.
	Preparing to unpack .../35-libxen-4.8_4.8.5.final+shim4.10.4-1+deb9u12_arm64.deb ...
	Unpacking libxen-4.8:arm64 (4.8.5.final+shim4.10.4-1+deb9u12) ...
	Selecting previously unselected package libvirt0.
	Preparing to unpack .../36-libvirt0_3.0.0-4+deb9u5_arm64.deb ...
	Unpacking libvirt0 (3.0.0-4+deb9u5) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../37-lvm2_2.02.168-2_arm64.deb ...
	Unpacking lvm2 (2.02.168-2) ...
	Selecting previously unselected package xml-core.
	Preparing to unpack .../38-xml-core_0.17_all.deb ...
	Unpacking xml-core (0.17) ...
	Setting up readline-common (7.0-3) ...
	Setting up libexpat1:arm64 (2.2.0-2+deb9u3) ...
	Setting up libnuma1:arm64 (2.0.11-2.1) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Setting up libsasl2-2:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Setting up libxenstore3.0:arm64 (4.8.5.final+shim4.10.4-1+deb9u12) ...
	Setting up sgml-base (1.29) ...
	Setting up libicu57:arm64 (57.1-6+deb9u4) ...
	Setting up libxml2:arm64 (2.9.4+dfsg1-2.2+deb9u5) ...
	Setting up libtasn1-6:arm64 (4.10-1.1+deb9u1) ...
	Setting up libyajl2:arm64 (2.1.0-2+b3) ...
	Setting up libgmp10:arm64 (2:6.1.2+dfsg-1) ...
	Setting up libssh2-1:arm64 (1.7.0-1+deb9u1) ...
	Processing triggers for libc-bin (2.24-11+deb9u4) ...
	Setting up libapparmor1:arm64 (2.11.0-3+deb9u2) ...
	Setting up libssl1.1:arm64 (1.1.0l-1~deb9u3) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.24.1 /usr/local/share/perl/5.24.1 /usr/lib/aarch64-linux-gnu/perl5/5.24 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.24 /usr/share/perl/5.24 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base .) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libffi6:arm64 (3.2.1-6) ...
	Setting up xml-core (0.17) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3+b1) ...
	Setting up libsasl2-modules:arm64 (2.1.27~101-g0780600+dfsg-3+deb9u1) ...
	Setting up libnl-3-200:arm64 (3.2.27-2) ...
	Setting up libdbus-1-3:arm64 (1.10.32-0+deb9u1) ...
	Setting up libavahi-common-data:arm64 (0.6.32-2) ...
	Setting up libfdt1:arm64 (1.4.2-1) ...
	Setting up libnl-route-3-200:arm64 (3.2.27-2) ...
	Setting up libxen-4.8:arm64 (4.8.5.final+shim4.10.4-1+deb9u12) ...
	Setting up libhogweed4:arm64 (3.3-1+b2) ...
	Setting up libp11-kit0:arm64 (0.23.3-2+deb9u1) ...
	Setting up libavahi-common3:arm64 (0.6.32-2) ...
	Setting up dbus (1.10.32-0+deb9u1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Setting up libgnutls30:arm64 (3.5.8-5+deb9u5) ...
	Setting up libavahi-client3:arm64 (0.6.32-2) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.137-2) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.137-2) ...
	Setting up liblvm2cmd2.02:arm64 (2.02.168-2) ...
	Setting up dmsetup (2:1.02.137-2) ...
	Setting up liblvm2app2.2:arm64 (2.02.168-2) ...
	Setting up dmeventd (2:1.02.137-2) ...
	Setting up lvm2 (2.02.168-2) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Setting up libvirt0 (3.0.0-4+deb9u5) ...
	Processing triggers for libc-bin (2.24-11+deb9u4) ...
	Processing triggers for sgml-base (1.29) ...

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to find image 'debian:9' locally
	9: Pulling from library/debian
	0789e4a342a1: Pulling fs layer
	0789e4a342a1: Verifying Checksum
	0789e4a342a1: Download complete
	0789e4a342a1: Pull complete
	Digest: sha256:8afcdd92f29e1706625631df94ecdfe3bdeb919bb2c6ee685803d245b75ee45a
	Status: Downloaded newer image for debian:9
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb (--install):
	 package architecture (aarch64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb

                                                
                                                
** /stderr **
pkg_install_test.go:87: failed to install "/home/jenkins/workspace/Docker_Linux_crio_arm64/out/docker-machine-driver-kvm2_1.22.0-0_arm64.deb" on "debian:9": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_arm64_debian:9/kvm2-driver (11.91s)
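Note: every arm64 kvm2-driver install in this run fails identically: dpkg rejects the package because its control file declares the GNU machine name (aarch64) rather than dpkg's architecture name (arm64). A minimal sketch for confirming this against the built artifact (package path taken from the log above; the printed values are inferred from the error message, not re-run):

	# Architecture field the .deb actually declares in its control file
	dpkg-deb -f /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb Architecture
	# expected given the error above: aarch64

	# Architecture name dpkg on the target image expects
	dpkg --print-architecture    # arm64 on these Debian/Ubuntu images
	uname -m                     # aarch64 (kernel/GNU name; not what dpkg matches)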

                                                
                                    
x
+
TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver (15.35s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": exit status 1 (15.353251613s)

                                                
                                                
-- stdout --
	Get:1 http://ports.ubuntu.com/ubuntu-ports focal InRelease [265 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease [114 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease [101 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease [114 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports focal/restricted arm64 Packages [1317 B]
	Get:6 http://ports.ubuntu.com/ubuntu-ports focal/multiverse arm64 Packages [139 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports focal/universe arm64 Packages [11.1 MB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 Packages [1234 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports focal-updates/universe arm64 Packages [988 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports focal-updates/multiverse arm64 Packages [7647 B]
	Get:11 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 Packages [1037 kB]
	Get:12 http://ports.ubuntu.com/ubuntu-ports focal-updates/restricted arm64 Packages [2893 B]
	Get:13 http://ports.ubuntu.com/ubuntu-ports focal-backports/main arm64 Packages [2680 B]
	Get:14 http://ports.ubuntu.com/ubuntu-ports focal-backports/universe arm64 Packages [6301 B]
	Get:15 http://ports.ubuntu.com/ubuntu-ports focal-security/main arm64 Packages [632 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports focal-security/restricted arm64 Packages [2649 B]
	Get:17 http://ports.ubuntu.com/ubuntu-ports focal-security/universe arm64 Packages [717 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports focal-security/multiverse arm64 Packages [2378 B]
	Fetched 16.5 MB in 1s (11.4 MB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libasn1-8-heimdal libbrotli1 libcurl3-gnutls libdbus-1-3
	  libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1 libglib2.0-0
	  libglib2.0-data libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu66
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14 libnl-3-200
	  libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1 libsasl2-2
	  libsasl2-modules libsasl2-modules-db libsqlite3-0 libssh-4 libssl1.1
	  libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix readline-common
	  shared-mime-info thin-provisioning-tools tzdata xdg-user-dirs
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus krb5-doc krb5-user
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libasn1-8-heimdal libbrotli1 libcurl3-gnutls libdbus-1-3
	  libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1 libglib2.0-0
	  libglib2.0-data libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu66
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14 libnl-3-200
	  libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1 libsasl2-2
	  libsasl2-modules libsasl2-modules-db libsqlite3-0 libssh-4 libssl1.1
	  libvirt0 libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix
	  readline-common shared-mime-info thin-provisioning-tools tzdata
	  xdg-user-dirs
	0 upgraded, 56 newly installed, 0 to remove and 11 not upgraded.
	Need to get 19.8 MB of archives.
	After this operation, 79.4 MB of additional disk space will be used.
	Get:1 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libssl1.1 arm64 1.1.1f-1ubuntu2.4 [1155 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 openssl arm64 1.1.1f-1ubuntu2.4 [599 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 ca-certificates all 20210119~20.04.1 [146 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libapparmor1 arm64 2.13.3-7ubuntu5.1 [32.9 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libdbus-1-3 arm64 1.12.16-2ubuntu2.1 [170 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libexpat1 arm64 2.2.9-1build1 [61.3 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 dbus arm64 1.12.16-2ubuntu2.1 [141 kB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libdevmapper1.02.1 arm64 2:1.02.167-1ubuntu1 [110 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 dmsetup arm64 2:1.02.167-1ubuntu1 [68.5 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libglib2.0-0 arm64 2.64.6-1~ubuntu20.04.3 [1199 kB]
	Get:11 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libglib2.0-data all 2.64.6-1~ubuntu20.04.3 [5988 B]
	Get:12 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 tzdata all 2021a-0ubuntu0.20.04 [295 kB]
	Get:13 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libicu66 arm64 66.1-2ubuntu2 [8357 kB]
	Get:14 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libsqlite3-0 arm64 3.31.1-4ubuntu0.2 [507 kB]
	Get:15 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libxml2 arm64 2.9.10+dfsg-5ubuntu0.20.04.1 [572 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 readline-common all 8.0-4 [53.5 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 shared-mime-info arm64 1.15-1 [429 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 xdg-user-dirs arm64 0.17-2ubuntu1 [47.6 kB]
	Get:19 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 krb5-locales all 1.17-6ubuntu4.1 [11.4 kB]
	Get:20 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libkrb5support0 arm64 1.17-6ubuntu4.1 [30.4 kB]
	Get:21 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libk5crypto3 arm64 1.17-6ubuntu4.1 [80.4 kB]
	Get:22 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libkeyutils1 arm64 1.6-6ubuntu1 [10.1 kB]
	Get:23 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libkrb5-3 arm64 1.17-6ubuntu4.1 [312 kB]
	Get:24 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libgssapi-krb5-2 arm64 1.17-6ubuntu4.1 [113 kB]
	Get:25 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnuma1 arm64 2.0.12-1 [20.5 kB]
	Get:26 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libpsl5 arm64 0.21.0-1ubuntu1 [51.3 kB]
	Get:27 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 publicsuffix all 20200303.0012-1 [111 kB]
	Get:28 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.167-1ubuntu1 [10.9 kB]
	Get:29 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libaio1 arm64 0.3.112-5 [7072 B]
	Get:30 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 liblvm2cmd2.03 arm64 2.03.07-1ubuntu1 [576 kB]
	Get:31 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 dmeventd arm64 2:1.02.167-1ubuntu1 [32.0 kB]
	Get:32 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libroken18-heimdal arm64 7.7.0+dfsg-1ubuntu1 [39.4 kB]
	Get:33 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libasn1-8-heimdal arm64 7.7.0+dfsg-1ubuntu1 [150 kB]
	Get:34 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libbrotli1 arm64 1.0.7-6ubuntu0.1 [257 kB]
	Get:35 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libheimbase1-heimdal arm64 7.7.0+dfsg-1ubuntu1 [27.9 kB]
	Get:36 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libhcrypto4-heimdal arm64 7.7.0+dfsg-1ubuntu1 [86.4 kB]
	Get:37 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libwind0-heimdal arm64 7.7.0+dfsg-1ubuntu1 [47.3 kB]
	Get:38 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libhx509-5-heimdal arm64 7.7.0+dfsg-1ubuntu1 [98.7 kB]
	Get:39 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libkrb5-26-heimdal arm64 7.7.0+dfsg-1ubuntu1 [191 kB]
	Get:40 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libheimntlm0-heimdal arm64 7.7.0+dfsg-1ubuntu1 [14.7 kB]
	Get:41 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libgssapi3-heimdal arm64 7.7.0+dfsg-1ubuntu1 [88.3 kB]
	Get:42 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-2 [15.1 kB]
	Get:43 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-2 arm64 2.1.27+dfsg-2 [48.4 kB]
	Get:44 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libldap-common all 2.4.49+dfsg-2ubuntu1.8 [16.6 kB]
	Get:45 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libldap-2.4-2 arm64 2.4.49+dfsg-2ubuntu1.8 [145 kB]
	Get:46 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnghttp2-14 arm64 1.40.0-1build1 [74.7 kB]
	Get:47 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2build1 [53.3 kB]
	Get:48 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libssh-4 arm64 0.9.3-2ubuntu2.1 [159 kB]
	Get:49 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libcurl3-gnutls arm64 7.68.0-1ubuntu2.5 [212 kB]
	Get:50 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnl-3-200 arm64 3.4.0-1 [51.5 kB]
	Get:51 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libreadline5 arm64 5.2+dfsg-3build3 [94.6 kB]
	Get:52 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-modules arm64 2.1.27+dfsg-2 [46.3 kB]
	Get:53 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libyajl2 arm64 2.1.0-3 [19.3 kB]
	Get:54 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libvirt0 arm64 6.0.0-0ubuntu8.10 [1267 kB]
	Get:55 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 lvm2 arm64 2.03.07-1ubuntu1 [951 kB]
	Get:56 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 thin-provisioning-tools arm64 0.8.5-4build1 [324 kB]
	Fetched 19.8 MB in 1s (33.9 MB/s)
	Selecting previously unselected package libssl1.1:arm64.
	(Reading database ... 4120 files and directories currently installed.)
	Preparing to unpack .../00-libssl1.1_1.1.1f-1ubuntu2.4_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.1f-1ubuntu2.4) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../01-openssl_1.1.1f-1ubuntu2.4_arm64.deb ...
	Unpacking openssl (1.1.1f-1ubuntu2.4) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../02-ca-certificates_20210119~20.04.1_all.deb ...
	Unpacking ca-certificates (20210119~20.04.1) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../03-libapparmor1_2.13.3-7ubuntu5.1_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.13.3-7ubuntu5.1) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../04-libdbus-1-3_1.12.16-2ubuntu2.1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.16-2ubuntu2.1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../05-libexpat1_2.2.9-1build1_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.9-1build1) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../06-dbus_1.12.16-2ubuntu2.1_arm64.deb ...
	Unpacking dbus (1.12.16-2ubuntu2.1) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../07-libdevmapper1.02.1_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../08-dmsetup_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking dmsetup (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libglib2.0-0:arm64.
	Preparing to unpack .../09-libglib2.0-0_2.64.6-1~ubuntu20.04.3_arm64.deb ...
	Unpacking libglib2.0-0:arm64 (2.64.6-1~ubuntu20.04.3) ...
	Selecting previously unselected package libglib2.0-data.
	Preparing to unpack .../10-libglib2.0-data_2.64.6-1~ubuntu20.04.3_all.deb ...
	Unpacking libglib2.0-data (2.64.6-1~ubuntu20.04.3) ...
	Selecting previously unselected package tzdata.
	Preparing to unpack .../11-tzdata_2021a-0ubuntu0.20.04_all.deb ...
	Unpacking tzdata (2021a-0ubuntu0.20.04) ...
	Selecting previously unselected package libicu66:arm64.
	Preparing to unpack .../12-libicu66_66.1-2ubuntu2_arm64.deb ...
	Unpacking libicu66:arm64 (66.1-2ubuntu2) ...
	Selecting previously unselected package libsqlite3-0:arm64.
	Preparing to unpack .../13-libsqlite3-0_3.31.1-4ubuntu0.2_arm64.deb ...
	Unpacking libsqlite3-0:arm64 (3.31.1-4ubuntu0.2) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../14-libxml2_2.9.10+dfsg-5ubuntu0.20.04.1_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.04.1) ...
	Selecting previously unselected package readline-common.
	Preparing to unpack .../15-readline-common_8.0-4_all.deb ...
	Unpacking readline-common (8.0-4) ...
	Selecting previously unselected package shared-mime-info.
	Preparing to unpack .../16-shared-mime-info_1.15-1_arm64.deb ...
	Unpacking shared-mime-info (1.15-1) ...
	Selecting previously unselected package xdg-user-dirs.
	Preparing to unpack .../17-xdg-user-dirs_0.17-2ubuntu1_arm64.deb ...
	Unpacking xdg-user-dirs (0.17-2ubuntu1) ...
	Selecting previously unselected package krb5-locales.
	Preparing to unpack .../18-krb5-locales_1.17-6ubuntu4.1_all.deb ...
	Unpacking krb5-locales (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libkrb5support0:arm64.
	Preparing to unpack .../19-libkrb5support0_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libkrb5support0:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libk5crypto3:arm64.
	Preparing to unpack .../20-libk5crypto3_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libk5crypto3:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libkeyutils1:arm64.
	Preparing to unpack .../21-libkeyutils1_1.6-6ubuntu1_arm64.deb ...
	Unpacking libkeyutils1:arm64 (1.6-6ubuntu1) ...
	Selecting previously unselected package libkrb5-3:arm64.
	Preparing to unpack .../22-libkrb5-3_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libkrb5-3:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libgssapi-krb5-2:arm64.
	Preparing to unpack .../23-libgssapi-krb5-2_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libgssapi-krb5-2:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../24-libnuma1_2.0.12-1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../25-libpsl5_0.21.0-1ubuntu1_arm64.deb ...
	Unpacking libpsl5:arm64 (0.21.0-1ubuntu1) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../26-publicsuffix_20200303.0012-1_all.deb ...
	Unpacking publicsuffix (20200303.0012-1) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../27-libdevmapper-event1.02.1_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libaio1:arm64.
	Preparing to unpack .../28-libaio1_0.3.112-5_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-5) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../29-liblvm2cmd2.03_2.03.07-1ubuntu1_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.07-1ubuntu1) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../30-dmeventd_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking dmeventd (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libroken18-heimdal:arm64.
	Preparing to unpack .../31-libroken18-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libroken18-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libasn1-8-heimdal:arm64.
	Preparing to unpack .../32-libasn1-8-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libasn1-8-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libbrotli1:arm64.
	Preparing to unpack .../33-libbrotli1_1.0.7-6ubuntu0.1_arm64.deb ...
	Unpacking libbrotli1:arm64 (1.0.7-6ubuntu0.1) ...
	Selecting previously unselected package libheimbase1-heimdal:arm64.
	Preparing to unpack .../34-libheimbase1-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libheimbase1-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libhcrypto4-heimdal:arm64.
	Preparing to unpack .../35-libhcrypto4-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libhcrypto4-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libwind0-heimdal:arm64.
	Preparing to unpack .../36-libwind0-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libwind0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libhx509-5-heimdal:arm64.
	Preparing to unpack .../37-libhx509-5-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libhx509-5-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libkrb5-26-heimdal:arm64.
	Preparing to unpack .../38-libkrb5-26-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libkrb5-26-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libheimntlm0-heimdal:arm64.
	Preparing to unpack .../39-libheimntlm0-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libheimntlm0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libgssapi3-heimdal:arm64.
	Preparing to unpack .../40-libgssapi3-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libgssapi3-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../41-libsasl2-modules-db_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../42-libsasl2-2_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../43-libldap-common_2.4.49+dfsg-2ubuntu1.8_all.deb ...
	Unpacking libldap-common (2.4.49+dfsg-2ubuntu1.8) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../44-libldap-2.4-2_2.4.49+dfsg-2ubuntu1.8_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.49+dfsg-2ubuntu1.8) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../45-libnghttp2-14_1.40.0-1build1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.40.0-1build1) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../46-librtmp1_2.4+20151223.gitfa8646d.1-2build1_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build1) ...
	Selecting previously unselected package libssh-4:arm64.
	Preparing to unpack .../47-libssh-4_0.9.3-2ubuntu2.1_arm64.deb ...
	Unpacking libssh-4:arm64 (0.9.3-2ubuntu2.1) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../48-libcurl3-gnutls_7.68.0-1ubuntu2.5_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.68.0-1ubuntu2.5) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../49-libnl-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../50-libreadline5_5.2+dfsg-3build3_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3build3) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../51-libsasl2-modules_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../52-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../53-libvirt0_6.0.0-0ubuntu8.10_arm64.deb ...
	Unpacking libvirt0:arm64 (6.0.0-0ubuntu8.10) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../54-lvm2_2.03.07-1ubuntu1_arm64.deb ...
	Unpacking lvm2 (2.03.07-1ubuntu1) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../55-thin-provisioning-tools_0.8.5-4build1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libexpat1:arm64 (2.2.9-1build1) ...
	Setting up libkeyutils1:arm64 (1.6-6ubuntu1) ...
	Setting up libapparmor1:arm64 (2.13.3-7ubuntu5.1) ...
	Setting up libpsl5:arm64 (0.21.0-1ubuntu1) ...
	Setting up xdg-user-dirs (0.17-2ubuntu1) ...
	Setting up libglib2.0-0:arm64 (2.64.6-1~ubuntu20.04.3) ...
	No schema files found: doing nothing.
	Setting up libssl1.1:arm64 (1.1.1f-1ubuntu2.4) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libbrotli1:arm64 (1.0.7-6ubuntu0.1) ...
	Setting up libsqlite3-0:arm64 (3.31.1-4ubuntu0.2) ...
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-2) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.40.0-1build1) ...
	Setting up krb5-locales (1.17-6ubuntu4.1) ...
	Setting up libldap-common (2.4.49+dfsg-2ubuntu1.8) ...
	Setting up libkrb5support0:arm64 (1.17-6ubuntu4.1) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-2) ...
	Setting up tzdata (2021a-0ubuntu0.20.04) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Configuring tzdata
	------------------
	
	Please select the geographic area in which you live. Subsequent configuration
	questions will narrow this down by presenting a list of cities, representing
	the time zones in which they are located.
	
	  1. Africa      4. Australia  7. Atlantic  10. Pacific  13. Etc
	  2. America     5. Arctic     8. Europe    11. SystemV
	  3. Antarctica  6. Asia       9. Indian    12. US
	Geographic area: 
	Use of uninitialized value $_[1] in join or string at /usr/share/perl5/Debconf/DbDriver/Stack.pm line 111.
	
	Current default time zone: '/UTC'
	Local time is now:      Thu Jul  8 23:37:24 UTC 2021.
	Universal Time is now:  Thu Jul  8 23:37:24 UTC 2021.
	Run 'dpkg-reconfigure tzdata' if you wish to change it.
	
	Use of uninitialized value $val in substitution (s///) at /usr/share/perl5/Debconf/Format/822.pm line 83, <GEN6> line 4.
	Use of uninitialized value $val in concatenation (.) or string at /usr/share/perl5/Debconf/Format/822.pm line 84, <GEN6> line 4.
	Setting up libglib2.0-data (2.64.6-1~ubuntu20.04.3) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build1) ...
	Setting up libdbus-1-3:arm64 (1.12.16-2ubuntu2.1) ...
	Setting up dbus (1.12.16-2ubuntu2.1) ...
	Setting up libk5crypto3:arm64 (1.17-6ubuntu4.1) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-2) ...
	Setting up libroken18-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Setting up libnuma1:arm64 (2.0.12-1) ...
	Setting up dmsetup (2:1.02.167-1ubuntu1) ...
	Setting up libnl-3-200:arm64 (3.4.0-1) ...
	Setting up libkrb5-3:arm64 (1.17-6ubuntu4.1) ...
	Setting up libaio1:arm64 (0.3.112-5) ...
	Setting up openssl (1.1.1f-1ubuntu2.4) ...
	Setting up readline-common (8.0-4) ...
	Setting up publicsuffix (20200303.0012-1) ...
	Setting up libheimbase1-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3build3) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Setting up libicu66:arm64 (66.1-2ubuntu2) ...
	Setting up libasn1-8-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libhcrypto4-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up ca-certificates (20210119~20.04.1) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	129 added, 0 removed; done.
	Setting up libwind0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libgssapi-krb5-2:arm64 (1.17-6ubuntu4.1) ...
	Setting up libssh-4:arm64 (0.9.3-2ubuntu2.1) ...
	Setting up libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.04.1) ...
	Setting up libhx509-5-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up shared-mime-info (1.15-1) ...
	Setting up libkrb5-26-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libheimntlm0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libgssapi3-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libldap-2.4-2:arm64 (2.4.49+dfsg-2ubuntu1.8) ...
	Setting up libcurl3-gnutls:arm64 (7.68.0-1ubuntu2.5) ...
	Setting up libvirt0:arm64 (6.0.0-0ubuntu8.10) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.07-1ubuntu1) ...
	Setting up dmeventd (2:1.02.167-1ubuntu1) ...
	Setting up lvm2 (2.03.07-1ubuntu1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
	Processing triggers for ca-certificates (20210119~20.04.1) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to find image 'ubuntu:latest' locally
	latest: Pulling from library/ubuntu
	6f86eded34a1: Pulling fs layer
	6f86eded34a1: Verifying Checksum
	6f86eded34a1: Download complete
	6f86eded34a1: Pull complete
	Digest: sha256:aba80b77e27148d99c034a987e7da3a287ed455390352663418c0f2ed40417fe
	Status: Downloaded newer image for ubuntu:latest
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb (--install):
	 package architecture (aarch64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb

                                                
                                                
** /stderr **
pkg_install_test.go:87: failed to install "/home/jenkins/workspace/Docker_Linux_crio_arm64/out/docker-machine-driver-kvm2_1.22.0-0_arm64.deb" on "ubuntu:latest": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver (15.35s)
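The ubuntu:latest failure is the same control-file mismatch, so the fix belongs in the deb packaging step rather than in this test. A hypothetical sketch of the mapping a build script would need if it derives the field from uname -m (control.tmpl and the --ARCH-- placeholder are illustrative names, not minikube's actual build files):

	MACHINE=$(uname -m)
	case "$MACHINE" in
	  aarch64) DEB_ARCH=arm64 ;;   # dpkg name differs from the GNU name here
	  x86_64)  DEB_ARCH=amd64 ;;
	  *)       DEB_ARCH=$MACHINE ;;
	esac
	sed "s/--ARCH--/${DEB_ARCH}/" control.tmpl > control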

                                                
                                    
x
+
TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver (14.41s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": exit status 1 (14.40724024s)

                                                
                                                
-- stdout --
	Get:1 http://ports.ubuntu.com/ubuntu-ports groovy InRelease [267 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports groovy-updates InRelease [115 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports groovy-backports InRelease [101 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports groovy-security InRelease [110 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 Packages [1727 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports groovy/restricted arm64 Packages [3561 B]
	Get:7 http://ports.ubuntu.com/ubuntu-ports groovy/multiverse arm64 Packages [208 kB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports groovy/universe arm64 Packages [15.8 MB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 Packages [561 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports groovy-updates/restricted arm64 Packages [3762 B]
	Get:11 http://ports.ubuntu.com/ubuntu-ports groovy-updates/universe arm64 Packages [525 kB]
	Get:12 http://ports.ubuntu.com/ubuntu-ports groovy-updates/multiverse arm64 Packages [2955 B]
	Get:13 http://ports.ubuntu.com/ubuntu-ports groovy-backports/universe arm64 Packages [4856 B]
	Get:14 http://ports.ubuntu.com/ubuntu-ports groovy-backports/main arm64 Packages [2690 B]
	Get:15 http://ports.ubuntu.com/ubuntu-ports groovy-security/restricted arm64 Packages [2654 B]
	Get:16 http://ports.ubuntu.com/ubuntu-ports groovy-security/multiverse arm64 Packages [670 B]
	Get:17 http://ports.ubuntu.com/ubuntu-ports groovy-security/main arm64 Packages [369 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports groovy-security/universe arm64 Packages [396 kB]
	Fetched 20.2 MB in 1s (13.9 MB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup libaio1 libapparmor1 libasn1-8-heimdal
	  libbrotli1 libcurl3-gnutls libdbus-1-3 libdevmapper-event1.02.1
	  libdevmapper1.02.1 libexpat1 libglib2.0-0 libglib2.0-data libgssapi3-heimdal
	  libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal
	  libhx509-5-heimdal libicu67 libkrb5-26-heimdal libldap-2.4-2 libldap-common
	  liblvm2cmd2.03 libnghttp2-14 libnl-3-200 libnuma1 libpsl5 libreadline5
	  libroken18-heimdal librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db
	  libsqlite3-0 libssh-4 libwind0-heimdal libxml2 libyajl2 lvm2 openssl
	  publicsuffix readline-common shared-mime-info thin-provisioning-tools
	  xdg-user-dirs
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus libsasl2-modules-gssapi-mit
	  | libsasl2-modules-gssapi-heimdal libsasl2-modules-ldap libsasl2-modules-otp
	  libsasl2-modules-sql readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup libaio1 libapparmor1 libasn1-8-heimdal
	  libbrotli1 libcurl3-gnutls libdbus-1-3 libdevmapper-event1.02.1
	  libdevmapper1.02.1 libexpat1 libglib2.0-0 libglib2.0-data libgssapi3-heimdal
	  libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal
	  libhx509-5-heimdal libicu67 libkrb5-26-heimdal libldap-2.4-2 libldap-common
	  liblvm2cmd2.03 libnghttp2-14 libnl-3-200 libnuma1 libpsl5 libreadline5
	  libroken18-heimdal librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db
	  libsqlite3-0 libssh-4 libvirt0 libwind0-heimdal libxml2 libyajl2 lvm2
	  openssl publicsuffix readline-common shared-mime-info
	  thin-provisioning-tools xdg-user-dirs
	0 upgraded, 48 newly installed, 0 to remove and 7 not upgraded.
	Need to get 18.0 MB of archives.
	After this operation, 70.6 MB of additional disk space will be used.
	Get:1 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 openssl arm64 1.1.1f-1ubuntu4.4 [600 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 ca-certificates all 20210119~20.10.1 [147 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libapparmor1 arm64 3.0.0-0ubuntu1 [35.2 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libdbus-1-3 arm64 1.12.20-1ubuntu1 [173 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libexpat1 arm64 2.2.9-1build1 [61.3 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 dbus arm64 1.12.20-1ubuntu1 [143 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libdevmapper1.02.1 arm64 2:1.02.167-1ubuntu3 [110 kB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 dmsetup arm64 2:1.02.167-1ubuntu3 [68.5 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libglib2.0-0 arm64 2.66.1-2ubuntu0.2 [1215 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libglib2.0-data all 2.66.1-2ubuntu0.2 [6440 B]
	Get:11 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libicu67 arm64 67.1-4 [8461 kB]
	Get:12 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libsqlite3-0 arm64 3.33.0-1ubuntu0.1 [540 kB]
	Get:13 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libxml2 arm64 2.9.10+dfsg-5ubuntu0.20.10.2 [559 kB]
	Get:14 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 readline-common all 8.0-4 [53.5 kB]
	Get:15 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 shared-mime-info arm64 2.0-1 [427 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 xdg-user-dirs arm64 0.17-2ubuntu2 [47.6 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libnuma1 arm64 2.0.12-1build1 [20.6 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libpsl5 arm64 0.21.0-1.1ubuntu1 [52.0 kB]
	Get:19 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 publicsuffix all 20200729.1725-1 [113 kB]
	Get:20 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.167-1ubuntu3 [10.9 kB]
	Get:21 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libaio1 arm64 0.3.112-8 [7384 B]
	Get:22 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 liblvm2cmd2.03 arm64 2.03.07-1ubuntu3 [575 kB]
	Get:23 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 dmeventd arm64 2:1.02.167-1ubuntu3 [32.0 kB]
	Get:24 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libroken18-heimdal arm64 7.7.0+dfsg-2 [39.4 kB]
	Get:25 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libasn1-8-heimdal arm64 7.7.0+dfsg-2 [150 kB]
	Get:26 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libbrotli1 arm64 1.0.9-2 [267 kB]
	Get:27 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libheimbase1-heimdal arm64 7.7.0+dfsg-2 [27.9 kB]
	Get:28 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libhcrypto4-heimdal arm64 7.7.0+dfsg-2 [84.8 kB]
	Get:29 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libwind0-heimdal arm64 7.7.0+dfsg-2 [47.2 kB]
	Get:30 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libhx509-5-heimdal arm64 7.7.0+dfsg-2 [98.6 kB]
	Get:31 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libkrb5-26-heimdal arm64 7.7.0+dfsg-2 [192 kB]
	Get:32 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libheimntlm0-heimdal arm64 7.7.0+dfsg-2 [14.8 kB]
	Get:33 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libgssapi3-heimdal arm64 7.7.0+dfsg-2 [88.4 kB]
	Get:34 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-2ubuntu1 [14.9 kB]
	Get:35 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libsasl2-2 arm64 2.1.27+dfsg-2ubuntu1 [48.4 kB]
	Get:36 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libldap-2.4-2 arm64 2.4.53+dfsg-1ubuntu1.4 [147 kB]
	Get:37 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libnghttp2-14 arm64 1.41.0-3 [64.6 kB]
	Get:38 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2build2 [53.1 kB]
	Get:39 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libssh-4 arm64 0.9.4-1ubuntu3 [161 kB]
	Get:40 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libcurl3-gnutls arm64 7.68.0-1ubuntu4.3 [212 kB]
	Get:41 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libldap-common all 2.4.53+dfsg-1ubuntu1.4 [17.7 kB]
	Get:42 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libnl-3-200 arm64 3.4.0-1 [51.5 kB]
	Get:43 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libreadline5 arm64 5.2+dfsg-3build3 [94.6 kB]
	Get:44 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libsasl2-modules arm64 2.1.27+dfsg-2ubuntu1 [46.2 kB]
	Get:45 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 libyajl2 arm64 2.1.0-3 [19.3 kB]
	Get:46 http://ports.ubuntu.com/ubuntu-ports groovy-updates/main arm64 libvirt0 arm64 6.6.0-1ubuntu3.5 [1348 kB]
	Get:47 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 lvm2 arm64 2.03.07-1ubuntu3 [951 kB]
	Get:48 http://ports.ubuntu.com/ubuntu-ports groovy/main arm64 thin-provisioning-tools arm64 0.8.5-4build1 [324 kB]
	Fetched 18.0 MB in 1s (32.1 MB/s)
	Selecting previously unselected package openssl.
	(Reading database ... 4258 files and directories currently installed.)
	Preparing to unpack .../00-openssl_1.1.1f-1ubuntu4.4_arm64.deb ...
	Unpacking openssl (1.1.1f-1ubuntu4.4) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../01-ca-certificates_20210119~20.10.1_all.deb ...
	Unpacking ca-certificates (20210119~20.10.1) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../02-libapparmor1_3.0.0-0ubuntu1_arm64.deb ...
	Unpacking libapparmor1:arm64 (3.0.0-0ubuntu1) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../03-libdbus-1-3_1.12.20-1ubuntu1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.20-1ubuntu1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../04-libexpat1_2.2.9-1build1_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.9-1build1) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../05-dbus_1.12.20-1ubuntu1_arm64.deb ...
	Unpacking dbus (1.12.20-1ubuntu1) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../06-libdevmapper1.02.1_2%3a1.02.167-1ubuntu3_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu3) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../07-dmsetup_2%3a1.02.167-1ubuntu3_arm64.deb ...
	Unpacking dmsetup (2:1.02.167-1ubuntu3) ...
	Selecting previously unselected package libglib2.0-0:arm64.
	Preparing to unpack .../08-libglib2.0-0_2.66.1-2ubuntu0.2_arm64.deb ...
	Unpacking libglib2.0-0:arm64 (2.66.1-2ubuntu0.2) ...
	Selecting previously unselected package libglib2.0-data.
	Preparing to unpack .../09-libglib2.0-data_2.66.1-2ubuntu0.2_all.deb ...
	Unpacking libglib2.0-data (2.66.1-2ubuntu0.2) ...
	Selecting previously unselected package libicu67:arm64.
	Preparing to unpack .../10-libicu67_67.1-4_arm64.deb ...
	Unpacking libicu67:arm64 (67.1-4) ...
	Selecting previously unselected package libsqlite3-0:arm64.
	Preparing to unpack .../11-libsqlite3-0_3.33.0-1ubuntu0.1_arm64.deb ...
	Unpacking libsqlite3-0:arm64 (3.33.0-1ubuntu0.1) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../12-libxml2_2.9.10+dfsg-5ubuntu0.20.10.2_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.10.2) ...
	Selecting previously unselected package readline-common.
	Preparing to unpack .../13-readline-common_8.0-4_all.deb ...
	Unpacking readline-common (8.0-4) ...
	Selecting previously unselected package shared-mime-info.
	Preparing to unpack .../14-shared-mime-info_2.0-1_arm64.deb ...
	Unpacking shared-mime-info (2.0-1) ...
	Selecting previously unselected package xdg-user-dirs.
	Preparing to unpack .../15-xdg-user-dirs_0.17-2ubuntu2_arm64.deb ...
	Unpacking xdg-user-dirs (0.17-2ubuntu2) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../16-libnuma1_2.0.12-1build1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1build1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../17-libpsl5_0.21.0-1.1ubuntu1_arm64.deb ...
	Unpacking libpsl5:arm64 (0.21.0-1.1ubuntu1) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../18-publicsuffix_20200729.1725-1_all.deb ...
	Unpacking publicsuffix (20200729.1725-1) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../19-libdevmapper-event1.02.1_2%3a1.02.167-1ubuntu3_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu3) ...
	Selecting previously unselected package libaio1:arm64.
	Preparing to unpack .../20-libaio1_0.3.112-8_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-8) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../21-liblvm2cmd2.03_2.03.07-1ubuntu3_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.07-1ubuntu3) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../22-dmeventd_2%3a1.02.167-1ubuntu3_arm64.deb ...
	Unpacking dmeventd (2:1.02.167-1ubuntu3) ...
	Selecting previously unselected package libroken18-heimdal:arm64.
	Preparing to unpack .../23-libroken18-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libroken18-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libasn1-8-heimdal:arm64.
	Preparing to unpack .../24-libasn1-8-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libasn1-8-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libbrotli1:arm64.
	Preparing to unpack .../25-libbrotli1_1.0.9-2_arm64.deb ...
	Unpacking libbrotli1:arm64 (1.0.9-2) ...
	Selecting previously unselected package libheimbase1-heimdal:arm64.
	Preparing to unpack .../26-libheimbase1-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libheimbase1-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libhcrypto4-heimdal:arm64.
	Preparing to unpack .../27-libhcrypto4-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libhcrypto4-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libwind0-heimdal:arm64.
	Preparing to unpack .../28-libwind0-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libwind0-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libhx509-5-heimdal:arm64.
	Preparing to unpack .../29-libhx509-5-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libhx509-5-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libkrb5-26-heimdal:arm64.
	Preparing to unpack .../30-libkrb5-26-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libkrb5-26-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libheimntlm0-heimdal:arm64.
	Preparing to unpack .../31-libheimntlm0-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libheimntlm0-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libgssapi3-heimdal:arm64.
	Preparing to unpack .../32-libgssapi3-heimdal_7.7.0+dfsg-2_arm64.deb ...
	Unpacking libgssapi3-heimdal:arm64 (7.7.0+dfsg-2) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../33-libsasl2-modules-db_2.1.27+dfsg-2ubuntu1_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../34-libsasl2-2_2.1.27+dfsg-2ubuntu1_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../35-libldap-2.4-2_2.4.53+dfsg-1ubuntu1.4_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.53+dfsg-1ubuntu1.4) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../36-libnghttp2-14_1.41.0-3_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.41.0-3) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../37-librtmp1_2.4+20151223.gitfa8646d.1-2build2_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build2) ...
	Selecting previously unselected package libssh-4:arm64.
	Preparing to unpack .../38-libssh-4_0.9.4-1ubuntu3_arm64.deb ...
	Unpacking libssh-4:arm64 (0.9.4-1ubuntu3) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../39-libcurl3-gnutls_7.68.0-1ubuntu4.3_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.68.0-1ubuntu4.3) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../40-libldap-common_2.4.53+dfsg-1ubuntu1.4_all.deb ...
	Unpacking libldap-common (2.4.53+dfsg-1ubuntu1.4) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../41-libnl-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../42-libreadline5_5.2+dfsg-3build3_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3build3) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../43-libsasl2-modules_2.1.27+dfsg-2ubuntu1_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../44-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../45-libvirt0_6.6.0-1ubuntu3.5_arm64.deb ...
	Unpacking libvirt0:arm64 (6.6.0-1ubuntu3.5) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../46-lvm2_2.03.07-1ubuntu3_arm64.deb ...
	Unpacking lvm2 (2.03.07-1ubuntu3) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../47-thin-provisioning-tools_0.8.5-4build1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libexpat1:arm64 (2.2.9-1build1) ...
	Setting up libapparmor1:arm64 (3.0.0-0ubuntu1) ...
	Setting up libpsl5:arm64 (0.21.0-1.1ubuntu1) ...
	Setting up libicu67:arm64 (67.1-4) ...
	Setting up xdg-user-dirs (0.17-2ubuntu2) ...
	Setting up libglib2.0-0:arm64 (2.66.1-2ubuntu0.2) ...
	No schema files found: doing nothing.
	Setting up libbrotli1:arm64 (1.0.9-2) ...
	Setting up libsqlite3-0:arm64 (3.33.0-1ubuntu0.1) ...
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.41.0-3) ...
	Setting up libldap-common (2.4.53+dfsg-1ubuntu1.4) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Setting up libglib2.0-data (2.66.1-2ubuntu0.2) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build2) ...
	Setting up libdbus-1-3:arm64 (1.12.20-1ubuntu1) ...
	Setting up dbus (1.12.20-1ubuntu1) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-2ubuntu1) ...
	Setting up libssh-4:arm64 (0.9.4-1ubuntu3) ...
	Setting up libroken18-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu3) ...
	Setting up libnuma1:arm64 (2.0.12-1build1) ...
	Setting up dmsetup (2:1.02.167-1ubuntu3) ...
	Setting up libnl-3-200:arm64 (3.4.0-1) ...
	Setting up libaio1:arm64 (0.3.112-8) ...
	Setting up openssl (1.1.1f-1ubuntu4.4) ...
	Setting up readline-common (8.0-4) ...
	Setting up publicsuffix (20200729.1725-1) ...
	Setting up libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.10.2) ...
	Setting up libheimbase1-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3build3) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu3) ...
	Setting up libasn1-8-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libhcrypto4-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up ca-certificates (20210119~20.10.1) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.3 /usr/local/share/perl/5.30.3 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl-base /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	129 added, 0 removed; done.
	Setting up libwind0-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up shared-mime-info (2.0-1) ...
	Setting up thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libhx509-5-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libkrb5-26-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libheimntlm0-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libgssapi3-heimdal:arm64 (7.7.0+dfsg-2) ...
	Setting up libldap-2.4-2:arm64 (2.4.53+dfsg-1ubuntu1.4) ...
	Setting up libcurl3-gnutls:arm64 (7.68.0-1ubuntu4.3) ...
	Setting up libvirt0:arm64 (6.6.0-1ubuntu3.5) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.07-1ubuntu3) ...
	Setting up dmeventd (2:1.02.167-1ubuntu3) ...
	Setting up lvm2 (2.03.07-1ubuntu3) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.32-0ubuntu3) ...
	Processing triggers for ca-certificates (20210119~20.10.1) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

-- /stdout --
** stderr ** 
	Unable to find image 'ubuntu:20.10' locally
	20.10: Pulling from library/ubuntu
	1e83729d1623: Pulling fs layer
	1e83729d1623: Verifying Checksum
	1e83729d1623: Download complete
	1e83729d1623: Pull complete
	Digest: sha256:6b603b0f3b8fc71b1a97bd38e081e8df04793f1447362c12385b48106aaded3f
	Status: Downloaded newer image for ubuntu:20.10
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb (--install):
	 package architecture (aarch64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb

** /stderr **
pkg_install_test.go:87: failed to install "/home/jenkins/workspace/Docker_Linux_crio_arm64/out/docker-machine-driver-kvm2_1.22.0-0_arm64.deb" on "ubuntu:20.10": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver (14.41s)
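The dpkg failure above is an architecture-naming mismatch: the package's control file declares its architecture as aarch64, while dpkg's name for 64-bit ARM on Ubuntu is arm64, so the install is refused before any files are unpacked. A minimal check to confirm this locally (a sketch, assuming the same .deb is at hand under out/; dpkg-deb ships with dpkg):

	# What the package declares (per the error above, this prints aarch64):
	dpkg-deb --field out/docker-machine-driver-kvm2_1.22.0-0_arm64.deb Architecture
	# What dpkg on these Ubuntu arm64 images expects (prints arm64):
	dpkg --print-architecture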

TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver (12.93s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": exit status 1 (12.929903875s)

-- stdout --
	Get:1 http://ports.ubuntu.com/ubuntu-ports focal InRelease [265 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease [114 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease [101 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease [114 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports focal/restricted arm64 Packages [1317 B]
	Get:6 http://ports.ubuntu.com/ubuntu-ports focal/multiverse arm64 Packages [139 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports focal/universe arm64 Packages [11.1 MB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 Packages [1234 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 Packages [1037 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports focal-updates/restricted arm64 Packages [2893 B]
	Get:11 http://ports.ubuntu.com/ubuntu-ports focal-updates/multiverse arm64 Packages [7647 B]
	Get:12 http://ports.ubuntu.com/ubuntu-ports focal-updates/universe arm64 Packages [988 kB]
	Get:13 http://ports.ubuntu.com/ubuntu-ports focal-backports/main arm64 Packages [2680 B]
	Get:14 http://ports.ubuntu.com/ubuntu-ports focal-backports/universe arm64 Packages [6301 B]
	Get:15 http://ports.ubuntu.com/ubuntu-ports focal-security/universe arm64 Packages [717 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports focal-security/main arm64 Packages [632 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports focal-security/restricted arm64 Packages [2649 B]
	Get:18 http://ports.ubuntu.com/ubuntu-ports focal-security/multiverse arm64 Packages [2378 B]
	Fetched 16.5 MB in 1s (14.0 MB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libasn1-8-heimdal libbrotli1 libcurl3-gnutls libdbus-1-3
	  libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1 libglib2.0-0
	  libglib2.0-data libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu66
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14 libnl-3-200
	  libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1 libsasl2-2
	  libsasl2-modules libsasl2-modules-db libsqlite3-0 libssh-4 libssl1.1
	  libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix readline-common
	  shared-mime-info thin-provisioning-tools tzdata xdg-user-dirs
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus krb5-doc krb5-user
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libaio1 libapparmor1
	  libasn1-8-heimdal libbrotli1 libcurl3-gnutls libdbus-1-3
	  libdevmapper-event1.02.1 libdevmapper1.02.1 libexpat1 libglib2.0-0
	  libglib2.0-data libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu66
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2cmd2.03 libnghttp2-14 libnl-3-200
	  libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1 libsasl2-2
	  libsasl2-modules libsasl2-modules-db libsqlite3-0 libssh-4 libssl1.1
	  libvirt0 libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix
	  readline-common shared-mime-info thin-provisioning-tools tzdata
	  xdg-user-dirs
	0 upgraded, 56 newly installed, 0 to remove and 11 not upgraded.
	Need to get 19.8 MB of archives.
	After this operation, 79.4 MB of additional disk space will be used.
	Get:1 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libssl1.1 arm64 1.1.1f-1ubuntu2.4 [1155 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 openssl arm64 1.1.1f-1ubuntu2.4 [599 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 ca-certificates all 20210119~20.04.1 [146 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libapparmor1 arm64 2.13.3-7ubuntu5.1 [32.9 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libdbus-1-3 arm64 1.12.16-2ubuntu2.1 [170 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libexpat1 arm64 2.2.9-1build1 [61.3 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 dbus arm64 1.12.16-2ubuntu2.1 [141 kB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libdevmapper1.02.1 arm64 2:1.02.167-1ubuntu1 [110 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 dmsetup arm64 2:1.02.167-1ubuntu1 [68.5 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libglib2.0-0 arm64 2.64.6-1~ubuntu20.04.3 [1199 kB]
	Get:11 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libglib2.0-data all 2.64.6-1~ubuntu20.04.3 [5988 B]
	Get:12 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 tzdata all 2021a-0ubuntu0.20.04 [295 kB]
	Get:13 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libicu66 arm64 66.1-2ubuntu2 [8357 kB]
	Get:14 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libsqlite3-0 arm64 3.31.1-4ubuntu0.2 [507 kB]
	Get:15 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libxml2 arm64 2.9.10+dfsg-5ubuntu0.20.04.1 [572 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 readline-common all 8.0-4 [53.5 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 shared-mime-info arm64 1.15-1 [429 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 xdg-user-dirs arm64 0.17-2ubuntu1 [47.6 kB]
	Get:19 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 krb5-locales all 1.17-6ubuntu4.1 [11.4 kB]
	Get:20 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libkrb5support0 arm64 1.17-6ubuntu4.1 [30.4 kB]
	Get:21 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libk5crypto3 arm64 1.17-6ubuntu4.1 [80.4 kB]
	Get:22 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libkeyutils1 arm64 1.6-6ubuntu1 [10.1 kB]
	Get:23 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libkrb5-3 arm64 1.17-6ubuntu4.1 [312 kB]
	Get:24 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libgssapi-krb5-2 arm64 1.17-6ubuntu4.1 [113 kB]
	Get:25 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnuma1 arm64 2.0.12-1 [20.5 kB]
	Get:26 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libpsl5 arm64 0.21.0-1ubuntu1 [51.3 kB]
	Get:27 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 publicsuffix all 20200303.0012-1 [111 kB]
	Get:28 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.167-1ubuntu1 [10.9 kB]
	Get:29 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libaio1 arm64 0.3.112-5 [7072 B]
	Get:30 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 liblvm2cmd2.03 arm64 2.03.07-1ubuntu1 [576 kB]
	Get:31 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 dmeventd arm64 2:1.02.167-1ubuntu1 [32.0 kB]
	Get:32 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libroken18-heimdal arm64 7.7.0+dfsg-1ubuntu1 [39.4 kB]
	Get:33 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libasn1-8-heimdal arm64 7.7.0+dfsg-1ubuntu1 [150 kB]
	Get:34 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libbrotli1 arm64 1.0.7-6ubuntu0.1 [257 kB]
	Get:35 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libheimbase1-heimdal arm64 7.7.0+dfsg-1ubuntu1 [27.9 kB]
	Get:36 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libhcrypto4-heimdal arm64 7.7.0+dfsg-1ubuntu1 [86.4 kB]
	Get:37 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libwind0-heimdal arm64 7.7.0+dfsg-1ubuntu1 [47.3 kB]
	Get:38 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libhx509-5-heimdal arm64 7.7.0+dfsg-1ubuntu1 [98.7 kB]
	Get:39 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libkrb5-26-heimdal arm64 7.7.0+dfsg-1ubuntu1 [191 kB]
	Get:40 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libheimntlm0-heimdal arm64 7.7.0+dfsg-1ubuntu1 [14.7 kB]
	Get:41 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libgssapi3-heimdal arm64 7.7.0+dfsg-1ubuntu1 [88.3 kB]
	Get:42 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-modules-db arm64 2.1.27+dfsg-2 [15.1 kB]
	Get:43 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-2 arm64 2.1.27+dfsg-2 [48.4 kB]
	Get:44 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libldap-common all 2.4.49+dfsg-2ubuntu1.8 [16.6 kB]
	Get:45 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libldap-2.4-2 arm64 2.4.49+dfsg-2ubuntu1.8 [145 kB]
	Get:46 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnghttp2-14 arm64 1.40.0-1build1 [74.7 kB]
	Get:47 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2build1 [53.3 kB]
	Get:48 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libssh-4 arm64 0.9.3-2ubuntu2.1 [159 kB]
	Get:49 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libcurl3-gnutls arm64 7.68.0-1ubuntu2.5 [212 kB]
	Get:50 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libnl-3-200 arm64 3.4.0-1 [51.5 kB]
	Get:51 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libreadline5 arm64 5.2+dfsg-3build3 [94.6 kB]
	Get:52 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libsasl2-modules arm64 2.1.27+dfsg-2 [46.3 kB]
	Get:53 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 libyajl2 arm64 2.1.0-3 [19.3 kB]
	Get:54 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 libvirt0 arm64 6.0.0-0ubuntu8.10 [1267 kB]
	Get:55 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 lvm2 arm64 2.03.07-1ubuntu1 [951 kB]
	Get:56 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 thin-provisioning-tools arm64 0.8.5-4build1 [324 kB]
	Fetched 19.8 MB in 1s (33.5 MB/s)
	Selecting previously unselected package libssl1.1:arm64.
	(Reading database ... 4120 files and directories currently installed.)
	Preparing to unpack .../00-libssl1.1_1.1.1f-1ubuntu2.4_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.1f-1ubuntu2.4) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../01-openssl_1.1.1f-1ubuntu2.4_arm64.deb ...
	Unpacking openssl (1.1.1f-1ubuntu2.4) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../02-ca-certificates_20210119~20.04.1_all.deb ...
	Unpacking ca-certificates (20210119~20.04.1) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../03-libapparmor1_2.13.3-7ubuntu5.1_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.13.3-7ubuntu5.1) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../04-libdbus-1-3_1.12.16-2ubuntu2.1_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.16-2ubuntu2.1) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../05-libexpat1_2.2.9-1build1_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.9-1build1) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../06-dbus_1.12.16-2ubuntu2.1_arm64.deb ...
	Unpacking dbus (1.12.16-2ubuntu2.1) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../07-libdevmapper1.02.1_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../08-dmsetup_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking dmsetup (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libglib2.0-0:arm64.
	Preparing to unpack .../09-libglib2.0-0_2.64.6-1~ubuntu20.04.3_arm64.deb ...
	Unpacking libglib2.0-0:arm64 (2.64.6-1~ubuntu20.04.3) ...
	Selecting previously unselected package libglib2.0-data.
	Preparing to unpack .../10-libglib2.0-data_2.64.6-1~ubuntu20.04.3_all.deb ...
	Unpacking libglib2.0-data (2.64.6-1~ubuntu20.04.3) ...
	Selecting previously unselected package tzdata.
	Preparing to unpack .../11-tzdata_2021a-0ubuntu0.20.04_all.deb ...
	Unpacking tzdata (2021a-0ubuntu0.20.04) ...
	Selecting previously unselected package libicu66:arm64.
	Preparing to unpack .../12-libicu66_66.1-2ubuntu2_arm64.deb ...
	Unpacking libicu66:arm64 (66.1-2ubuntu2) ...
	Selecting previously unselected package libsqlite3-0:arm64.
	Preparing to unpack .../13-libsqlite3-0_3.31.1-4ubuntu0.2_arm64.deb ...
	Unpacking libsqlite3-0:arm64 (3.31.1-4ubuntu0.2) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../14-libxml2_2.9.10+dfsg-5ubuntu0.20.04.1_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.04.1) ...
	Selecting previously unselected package readline-common.
	Preparing to unpack .../15-readline-common_8.0-4_all.deb ...
	Unpacking readline-common (8.0-4) ...
	Selecting previously unselected package shared-mime-info.
	Preparing to unpack .../16-shared-mime-info_1.15-1_arm64.deb ...
	Unpacking shared-mime-info (1.15-1) ...
	Selecting previously unselected package xdg-user-dirs.
	Preparing to unpack .../17-xdg-user-dirs_0.17-2ubuntu1_arm64.deb ...
	Unpacking xdg-user-dirs (0.17-2ubuntu1) ...
	Selecting previously unselected package krb5-locales.
	Preparing to unpack .../18-krb5-locales_1.17-6ubuntu4.1_all.deb ...
	Unpacking krb5-locales (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libkrb5support0:arm64.
	Preparing to unpack .../19-libkrb5support0_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libkrb5support0:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libk5crypto3:arm64.
	Preparing to unpack .../20-libk5crypto3_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libk5crypto3:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libkeyutils1:arm64.
	Preparing to unpack .../21-libkeyutils1_1.6-6ubuntu1_arm64.deb ...
	Unpacking libkeyutils1:arm64 (1.6-6ubuntu1) ...
	Selecting previously unselected package libkrb5-3:arm64.
	Preparing to unpack .../22-libkrb5-3_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libkrb5-3:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libgssapi-krb5-2:arm64.
	Preparing to unpack .../23-libgssapi-krb5-2_1.17-6ubuntu4.1_arm64.deb ...
	Unpacking libgssapi-krb5-2:arm64 (1.17-6ubuntu4.1) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../24-libnuma1_2.0.12-1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.12-1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../25-libpsl5_0.21.0-1ubuntu1_arm64.deb ...
	Unpacking libpsl5:arm64 (0.21.0-1ubuntu1) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../26-publicsuffix_20200303.0012-1_all.deb ...
	Unpacking publicsuffix (20200303.0012-1) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../27-libdevmapper-event1.02.1_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libaio1:arm64.
	Preparing to unpack .../28-libaio1_0.3.112-5_arm64.deb ...
	Unpacking libaio1:arm64 (0.3.112-5) ...
	Selecting previously unselected package liblvm2cmd2.03:arm64.
	Preparing to unpack .../29-liblvm2cmd2.03_2.03.07-1ubuntu1_arm64.deb ...
	Unpacking liblvm2cmd2.03:arm64 (2.03.07-1ubuntu1) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../30-dmeventd_2%3a1.02.167-1ubuntu1_arm64.deb ...
	Unpacking dmeventd (2:1.02.167-1ubuntu1) ...
	Selecting previously unselected package libroken18-heimdal:arm64.
	Preparing to unpack .../31-libroken18-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libroken18-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libasn1-8-heimdal:arm64.
	Preparing to unpack .../32-libasn1-8-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libasn1-8-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libbrotli1:arm64.
	Preparing to unpack .../33-libbrotli1_1.0.7-6ubuntu0.1_arm64.deb ...
	Unpacking libbrotli1:arm64 (1.0.7-6ubuntu0.1) ...
	Selecting previously unselected package libheimbase1-heimdal:arm64.
	Preparing to unpack .../34-libheimbase1-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libheimbase1-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libhcrypto4-heimdal:arm64.
	Preparing to unpack .../35-libhcrypto4-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libhcrypto4-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libwind0-heimdal:arm64.
	Preparing to unpack .../36-libwind0-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libwind0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libhx509-5-heimdal:arm64.
	Preparing to unpack .../37-libhx509-5-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libhx509-5-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libkrb5-26-heimdal:arm64.
	Preparing to unpack .../38-libkrb5-26-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libkrb5-26-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libheimntlm0-heimdal:arm64.
	Preparing to unpack .../39-libheimntlm0-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libheimntlm0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libgssapi3-heimdal:arm64.
	Preparing to unpack .../40-libgssapi3-heimdal_7.7.0+dfsg-1ubuntu1_arm64.deb ...
	Unpacking libgssapi3-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../41-libsasl2-modules-db_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../42-libsasl2-2_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../43-libldap-common_2.4.49+dfsg-2ubuntu1.8_all.deb ...
	Unpacking libldap-common (2.4.49+dfsg-2ubuntu1.8) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../44-libldap-2.4-2_2.4.49+dfsg-2ubuntu1.8_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.49+dfsg-2ubuntu1.8) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../45-libnghttp2-14_1.40.0-1build1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.40.0-1build1) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../46-librtmp1_2.4+20151223.gitfa8646d.1-2build1_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build1) ...
	Selecting previously unselected package libssh-4:arm64.
	Preparing to unpack .../47-libssh-4_0.9.3-2ubuntu2.1_arm64.deb ...
	Unpacking libssh-4:arm64 (0.9.3-2ubuntu2.1) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../48-libcurl3-gnutls_7.68.0-1ubuntu2.5_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.68.0-1ubuntu2.5) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../49-libnl-3-200_3.4.0-1_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.4.0-1) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../50-libreadline5_5.2+dfsg-3build3_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3build3) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../51-libsasl2-modules_2.1.27+dfsg-2_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27+dfsg-2) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../52-libyajl2_2.1.0-3_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-3) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../53-libvirt0_6.0.0-0ubuntu8.10_arm64.deb ...
	Unpacking libvirt0:arm64 (6.0.0-0ubuntu8.10) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../54-lvm2_2.03.07-1ubuntu1_arm64.deb ...
	Unpacking lvm2 (2.03.07-1ubuntu1) ...
	Selecting previously unselected package thin-provisioning-tools.
	Preparing to unpack .../55-thin-provisioning-tools_0.8.5-4build1_arm64.deb ...
	Unpacking thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libexpat1:arm64 (2.2.9-1build1) ...
	Setting up libkeyutils1:arm64 (1.6-6ubuntu1) ...
	Setting up libapparmor1:arm64 (2.13.3-7ubuntu5.1) ...
	Setting up libpsl5:arm64 (0.21.0-1ubuntu1) ...
	Setting up xdg-user-dirs (0.17-2ubuntu1) ...
	Setting up libglib2.0-0:arm64 (2.64.6-1~ubuntu20.04.3) ...
	No schema files found: doing nothing.
	Setting up libssl1.1:arm64 (1.1.1f-1ubuntu2.4) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libbrotli1:arm64 (1.0.7-6ubuntu0.1) ...
	Setting up libsqlite3-0:arm64 (3.31.1-4ubuntu0.2) ...
	Setting up libsasl2-modules:arm64 (2.1.27+dfsg-2) ...
	Setting up libyajl2:arm64 (2.1.0-3) ...
	Setting up libnghttp2-14:arm64 (1.40.0-1build1) ...
	Setting up krb5-locales (1.17-6ubuntu4.1) ...
	Setting up libldap-common (2.4.49+dfsg-2ubuntu1.8) ...
	Setting up libkrb5support0:arm64 (1.17-6ubuntu4.1) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27+dfsg-2) ...
	Setting up tzdata (2021a-0ubuntu0.20.04) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Configuring tzdata
	------------------
	
	Please select the geographic area in which you live. Subsequent configuration
	questions will narrow this down by presenting a list of cities, representing
	the time zones in which they are located.
	
	  1. Africa      4. Australia  7. Atlantic  10. Pacific  13. Etc
	  2. America     5. Arctic     8. Europe    11. SystemV
	  3. Antarctica  6. Asia       9. Indian    12. US
	Geographic area: 
	Use of uninitialized value $_[1] in join or string at /usr/share/perl5/Debconf/DbDriver/Stack.pm line 111.
	
	Current default time zone: '/UTC'
	Local time is now:      Thu Jul  8 23:37:51 UTC 2021.
	Universal Time is now:  Thu Jul  8 23:37:51 UTC 2021.
	Run 'dpkg-reconfigure tzdata' if you wish to change it.
	
	Use of uninitialized value $val in substitution (s///) at /usr/share/perl5/Debconf/Format/822.pm line 83, <GEN6> line 4.
	Use of uninitialized value $val in concatenation (.) or string at /usr/share/perl5/Debconf/Format/822.pm line 84, <GEN6> line 4.
	Setting up libglib2.0-data (2.64.6-1~ubuntu20.04.3) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2build1) ...
	Setting up libdbus-1-3:arm64 (1.12.16-2ubuntu2.1) ...
	Setting up dbus (1.12.16-2ubuntu2.1) ...
	Setting up libk5crypto3:arm64 (1.17-6ubuntu4.1) ...
	Setting up libsasl2-2:arm64 (2.1.27+dfsg-2) ...
	Setting up libroken18-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Setting up libnuma1:arm64 (2.0.12-1) ...
	Setting up dmsetup (2:1.02.167-1ubuntu1) ...
	Setting up libnl-3-200:arm64 (3.4.0-1) ...
	Setting up libkrb5-3:arm64 (1.17-6ubuntu4.1) ...
	Setting up libaio1:arm64 (0.3.112-5) ...
	Setting up openssl (1.1.1f-1ubuntu2.4) ...
	Setting up readline-common (8.0-4) ...
	Setting up publicsuffix (20200303.0012-1) ...
	Setting up libheimbase1-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3build3) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.167-1ubuntu1) ...
	Setting up libicu66:arm64 (66.1-2ubuntu2) ...
	Setting up libasn1-8-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libhcrypto4-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up ca-certificates (20210119~20.04.1) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.30.0 /usr/local/share/perl/5.30.0 /usr/lib/aarch64-linux-gnu/perl5/5.30 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.30 /usr/share/perl/5.30 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	129 added, 0 removed; done.
	Setting up libwind0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up thin-provisioning-tools (0.8.5-4build1) ...
	Setting up libgssapi-krb5-2:arm64 (1.17-6ubuntu4.1) ...
	Setting up libssh-4:arm64 (0.9.3-2ubuntu2.1) ...
	Setting up libxml2:arm64 (2.9.10+dfsg-5ubuntu0.20.04.1) ...
	Setting up libhx509-5-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up shared-mime-info (1.15-1) ...
	Setting up libkrb5-26-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libheimntlm0-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libgssapi3-heimdal:arm64 (7.7.0+dfsg-1ubuntu1) ...
	Setting up libldap-2.4-2:arm64 (2.4.49+dfsg-2ubuntu1.8) ...
	Setting up libcurl3-gnutls:arm64 (7.68.0-1ubuntu2.5) ...
	Setting up libvirt0:arm64 (6.0.0-0ubuntu8.10) ...
	Setting up liblvm2cmd2.03:arm64 (2.03.07-1ubuntu1) ...
	Setting up dmeventd (2:1.02.167-1ubuntu1) ...
	Setting up lvm2 (2.03.07-1ubuntu1) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
	Processing triggers for ca-certificates (20210119~20.04.1) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

-- /stdout --
** stderr ** 
	Unable to find image 'ubuntu:20.04' locally
	20.04: Pulling from library/ubuntu
	Digest: sha256:aba80b77e27148d99c034a987e7da3a287ed455390352663418c0f2ed40417fe
	Status: Downloaded newer image for ubuntu:20.04
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb (--install):
	 package architecture (aarch64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb

** /stderr **
pkg_install_test.go:87: failed to install "/home/jenkins/workspace/Docker_Linux_crio_arm64/out/docker-machine-driver-kvm2_1.22.0-0_arm64.deb" on "ubuntu:20.04": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver (12.93s)
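The ubuntu:20.04 run fails identically to ubuntu:20.10 above (and the ubuntu:18.04 run below repeats the same procedure), which points at the packaging step rather than any particular distribution. As a hedged sketch of a manual repair, assuming the only defect is the misnamed field, the package can be unpacked, the Architecture line in DEBIAN/control rewritten from aarch64 to arm64, and the .deb rebuilt:

	# Illustrative repack; pkgroot is a scratch directory name, not from the report.
	dpkg-deb -R docker-machine-driver-kvm2_1.22.0-0_arm64.deb pkgroot
	sed -i 's/^Architecture: aarch64$/Architecture: arm64/' pkgroot/DEBIAN/control
	dpkg-deb -b pkgroot docker-machine-driver-kvm2_1.22.0-0_arm64.deb

The durable fix is to emit arm64 from the packaging scripts themselves (on these hosts uname -m reports aarch64, but dpkg's name for the same architecture is arm64), so the test images install the driver unmodified.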

TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver (10.98s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Non-zero exit: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_arm64/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": exit status 1 (10.977608182s)

-- stdout --
	Get:1 http://ports.ubuntu.com/ubuntu-ports bionic InRelease [242 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease [88.7 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease [74.6 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease [88.7 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 Packages [1285 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 Packages [11.0 MB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports bionic/multiverse arm64 Packages [153 kB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports bionic/restricted arm64 Packages [572 B]
	Get:9 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 Packages [1635 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports bionic-updates/restricted arm64 Packages [3771 B]
	Get:11 http://ports.ubuntu.com/ubuntu-ports bionic-updates/universe arm64 Packages [1933 kB]
	Get:12 http://ports.ubuntu.com/ubuntu-ports bionic-updates/multiverse arm64 Packages [5320 B]
	Get:13 http://ports.ubuntu.com/ubuntu-ports bionic-backports/universe arm64 Packages [11.0 kB]
	Get:14 http://ports.ubuntu.com/ubuntu-ports bionic-backports/main arm64 Packages [11.2 kB]
	Get:15 http://ports.ubuntu.com/ubuntu-ports bionic-security/main arm64 Packages [1248 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports bionic-security/restricted arm64 Packages [3099 B]
	Get:17 http://ports.ubuntu.com/ubuntu-ports bionic-security/universe arm64 Packages [1244 kB]
	Get:18 http://ports.ubuntu.com/ubuntu-ports bionic-security/multiverse arm64 Packages [2824 B]
	Fetched 19.0 MB in 2s (11.2 MB/s)
	Reading package lists...
	Reading package lists...
	Building dependency tree...
	Reading state information...
	The following additional packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libapparmor1
	  libasn1-8-heimdal libavahi-client3 libavahi-common-data libavahi-common3
	  libcurl3-gnutls libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1
	  libexpat1 libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu60
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2app2.2 liblvm2cmd2.02 libnghttp2-14
	  libnl-3-200 libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libsqlite3-0 libssl1.1
	  libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix readline-common
	Suggested packages:
	  default-dbus-session-bus | dbus-session-bus krb5-doc krb5-user
	  libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
	  libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql
	  thin-provisioning-tools readline-doc
	The following NEW packages will be installed:
	  ca-certificates dbus dmeventd dmsetup krb5-locales libapparmor1
	  libasn1-8-heimdal libavahi-client3 libavahi-common-data libavahi-common3
	  libcurl3-gnutls libdbus-1-3 libdevmapper-event1.02.1 libdevmapper1.02.1
	  libexpat1 libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal
	  libheimbase1-heimdal libheimntlm0-heimdal libhx509-5-heimdal libicu60
	  libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0
	  libldap-2.4-2 libldap-common liblvm2app2.2 liblvm2cmd2.02 libnghttp2-14
	  libnl-3-200 libnuma1 libpsl5 libreadline5 libroken18-heimdal librtmp1
	  libsasl2-2 libsasl2-modules libsasl2-modules-db libsqlite3-0 libssl1.1
	  libvirt0 libwind0-heimdal libxml2 libyajl2 lvm2 openssl publicsuffix
	  readline-common
	0 upgraded, 51 newly installed, 0 to remove and 6 not upgraded.
	Need to get 16.2 MB of archives.
	After this operation, 62.3 MB of additional disk space will be used.
	Get:1 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libssl1.1 arm64 1.1.1-1ubuntu2.1~18.04.9 [1062 kB]
	Get:2 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 openssl arm64 1.1.1-1ubuntu2.1~18.04.9 [583 kB]
	Get:3 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 ca-certificates all 20210119~18.04.1 [147 kB]
	Get:4 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libapparmor1 arm64 2.12-4ubuntu5.1 [28.4 kB]
	Get:5 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libdbus-1-3 arm64 1.12.2-1ubuntu1.2 [152 kB]
	Get:6 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libexpat1 arm64 2.2.5-3ubuntu0.2 [69.3 kB]
	Get:7 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 dbus arm64 1.12.2-1ubuntu1.2 [130 kB]
	Get:8 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libdevmapper1.02.1 arm64 2:1.02.145-4.1ubuntu3.18.04.3 [100 kB]
	Get:9 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 dmsetup arm64 2:1.02.145-4.1ubuntu3.18.04.3 [65.1 kB]
	Get:10 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libicu60 arm64 60.2-3ubuntu3.1 [7987 kB]
	Get:11 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libsqlite3-0 arm64 3.22.0-1ubuntu0.4 [430 kB]
	Get:12 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libxml2 arm64 2.9.4+dfsg1-6.1ubuntu1.4 [548 kB]
	Get:13 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 readline-common all 7.0-3 [52.9 kB]
	Get:14 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 krb5-locales all 1.16-2ubuntu0.2 [13.4 kB]
	Get:15 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libkrb5support0 arm64 1.16-2ubuntu0.2 [28.1 kB]
	Get:16 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libk5crypto3 arm64 1.16-2ubuntu0.2 [79.9 kB]
	Get:17 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libkeyutils1 arm64 1.5.9-9.2ubuntu2 [8112 B]
	Get:18 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libkrb5-3 arm64 1.16-2ubuntu0.2 [241 kB]
	Get:19 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libgssapi-krb5-2 arm64 1.16-2ubuntu0.2 [103 kB]
	Get:20 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libnuma1 arm64 2.0.11-2.1ubuntu0.1 [19.4 kB]
	Get:21 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libpsl5 arm64 0.19.1-5build1 [40.9 kB]
	Get:22 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 publicsuffix all 20180223.1310-1 [97.6 kB]
	Get:23 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libdevmapper-event1.02.1 arm64 2:1.02.145-4.1ubuntu3.18.04.3 [9444 B]
	Get:24 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 liblvm2cmd2.02 arm64 2.02.176-4.1ubuntu3.18.04.3 [471 kB]
	Get:25 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 dmeventd arm64 2:1.02.145-4.1ubuntu3.18.04.3 [25.9 kB]
	Get:26 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libroken18-heimdal arm64 7.5.0+dfsg-1 [35.4 kB]
	Get:27 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libasn1-8-heimdal arm64 7.5.0+dfsg-1 [130 kB]
	Get:28 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libavahi-common-data arm64 0.7-3.1ubuntu1.3 [22.2 kB]
	Get:29 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libavahi-common3 arm64 0.7-3.1ubuntu1.3 [18.4 kB]
	Get:30 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libavahi-client3 arm64 0.7-3.1ubuntu1.3 [21.9 kB]
	Get:31 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libheimbase1-heimdal arm64 7.5.0+dfsg-1 [24.9 kB]
	Get:32 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libhcrypto4-heimdal arm64 7.5.0+dfsg-1 [76.4 kB]
	Get:33 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libwind0-heimdal arm64 7.5.0+dfsg-1 [47.0 kB]
	Get:34 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libhx509-5-heimdal arm64 7.5.0+dfsg-1 [88.5 kB]
	Get:35 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libkrb5-26-heimdal arm64 7.5.0+dfsg-1 [170 kB]
	Get:36 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libheimntlm0-heimdal arm64 7.5.0+dfsg-1 [13.3 kB]
	Get:37 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libgssapi3-heimdal arm64 7.5.0+dfsg-1 [79.1 kB]
	Get:38 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libsasl2-modules-db arm64 2.1.27~101-g0780600+dfsg-3ubuntu2.3 [13.6 kB]
	Get:39 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libsasl2-2 arm64 2.1.27~101-g0780600+dfsg-3ubuntu2.3 [43.2 kB]
	Get:40 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libldap-common all 2.4.45+dfsg-1ubuntu1.10 [15.8 kB]
	Get:41 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libldap-2.4-2 arm64 2.4.45+dfsg-1ubuntu1.10 [131 kB]
	Get:42 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libnghttp2-14 arm64 1.30.0-1ubuntu1 [68.9 kB]
	Get:43 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-1 [48.2 kB]
	Get:44 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libcurl3-gnutls arm64 7.58.0-2ubuntu3.13 [183 kB]
	Get:45 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 liblvm2app2.2 arm64 2.02.176-4.1ubuntu3.18.04.3 [346 kB]
	Get:46 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libnl-3-200 arm64 3.2.29-0ubuntu3 [44.4 kB]
	Get:47 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libreadline5 arm64 5.2+dfsg-3build1 [82.1 kB]
	Get:48 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libsasl2-modules arm64 2.1.27~101-g0780600+dfsg-3ubuntu2.3 [42.0 kB]
	Get:49 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libyajl2 arm64 2.1.0-2build1 [17.7 kB]
	Get:50 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 libvirt0 arm64 4.0.0-1ubuntu8.19 [1182 kB]
	Get:51 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 lvm2 arm64 2.02.176-4.1ubuntu3.18.04.3 [811 kB]
	Fetched 16.2 MB in 1s (30.3 MB/s)
	Selecting previously unselected package libssl1.1:arm64.
	(Reading database ... 4044 files and directories currently installed.)
	Preparing to unpack .../00-libssl1.1_1.1.1-1ubuntu2.1~18.04.9_arm64.deb ...
	Unpacking libssl1.1:arm64 (1.1.1-1ubuntu2.1~18.04.9) ...
	Selecting previously unselected package openssl.
	Preparing to unpack .../01-openssl_1.1.1-1ubuntu2.1~18.04.9_arm64.deb ...
	Unpacking openssl (1.1.1-1ubuntu2.1~18.04.9) ...
	Selecting previously unselected package ca-certificates.
	Preparing to unpack .../02-ca-certificates_20210119~18.04.1_all.deb ...
	Unpacking ca-certificates (20210119~18.04.1) ...
	Selecting previously unselected package libapparmor1:arm64.
	Preparing to unpack .../03-libapparmor1_2.12-4ubuntu5.1_arm64.deb ...
	Unpacking libapparmor1:arm64 (2.12-4ubuntu5.1) ...
	Selecting previously unselected package libdbus-1-3:arm64.
	Preparing to unpack .../04-libdbus-1-3_1.12.2-1ubuntu1.2_arm64.deb ...
	Unpacking libdbus-1-3:arm64 (1.12.2-1ubuntu1.2) ...
	Selecting previously unselected package libexpat1:arm64.
	Preparing to unpack .../05-libexpat1_2.2.5-3ubuntu0.2_arm64.deb ...
	Unpacking libexpat1:arm64 (2.2.5-3ubuntu0.2) ...
	Selecting previously unselected package dbus.
	Preparing to unpack .../06-dbus_1.12.2-1ubuntu1.2_arm64.deb ...
	Unpacking dbus (1.12.2-1ubuntu1.2) ...
	Selecting previously unselected package libdevmapper1.02.1:arm64.
	Preparing to unpack .../07-libdevmapper1.02.1_2%3a1.02.145-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking libdevmapper1.02.1:arm64 (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package dmsetup.
	Preparing to unpack .../08-dmsetup_2%3a1.02.145-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking dmsetup (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package libicu60:arm64.
	Preparing to unpack .../09-libicu60_60.2-3ubuntu3.1_arm64.deb ...
	Unpacking libicu60:arm64 (60.2-3ubuntu3.1) ...
	Selecting previously unselected package libsqlite3-0:arm64.
	Preparing to unpack .../10-libsqlite3-0_3.22.0-1ubuntu0.4_arm64.deb ...
	Unpacking libsqlite3-0:arm64 (3.22.0-1ubuntu0.4) ...
	Selecting previously unselected package libxml2:arm64.
	Preparing to unpack .../11-libxml2_2.9.4+dfsg1-6.1ubuntu1.4_arm64.deb ...
	Unpacking libxml2:arm64 (2.9.4+dfsg1-6.1ubuntu1.4) ...
	Selecting previously unselected package readline-common.
	Preparing to unpack .../12-readline-common_7.0-3_all.deb ...
	Unpacking readline-common (7.0-3) ...
	Selecting previously unselected package krb5-locales.
	Preparing to unpack .../13-krb5-locales_1.16-2ubuntu0.2_all.deb ...
	Unpacking krb5-locales (1.16-2ubuntu0.2) ...
	Selecting previously unselected package libkrb5support0:arm64.
	Preparing to unpack .../14-libkrb5support0_1.16-2ubuntu0.2_arm64.deb ...
	Unpacking libkrb5support0:arm64 (1.16-2ubuntu0.2) ...
	Selecting previously unselected package libk5crypto3:arm64.
	Preparing to unpack .../15-libk5crypto3_1.16-2ubuntu0.2_arm64.deb ...
	Unpacking libk5crypto3:arm64 (1.16-2ubuntu0.2) ...
	Selecting previously unselected package libkeyutils1:arm64.
	Preparing to unpack .../16-libkeyutils1_1.5.9-9.2ubuntu2_arm64.deb ...
	Unpacking libkeyutils1:arm64 (1.5.9-9.2ubuntu2) ...
	Selecting previously unselected package libkrb5-3:arm64.
	Preparing to unpack .../17-libkrb5-3_1.16-2ubuntu0.2_arm64.deb ...
	Unpacking libkrb5-3:arm64 (1.16-2ubuntu0.2) ...
	Selecting previously unselected package libgssapi-krb5-2:arm64.
	Preparing to unpack .../18-libgssapi-krb5-2_1.16-2ubuntu0.2_arm64.deb ...
	Unpacking libgssapi-krb5-2:arm64 (1.16-2ubuntu0.2) ...
	Selecting previously unselected package libnuma1:arm64.
	Preparing to unpack .../19-libnuma1_2.0.11-2.1ubuntu0.1_arm64.deb ...
	Unpacking libnuma1:arm64 (2.0.11-2.1ubuntu0.1) ...
	Selecting previously unselected package libpsl5:arm64.
	Preparing to unpack .../20-libpsl5_0.19.1-5build1_arm64.deb ...
	Unpacking libpsl5:arm64 (0.19.1-5build1) ...
	Selecting previously unselected package publicsuffix.
	Preparing to unpack .../21-publicsuffix_20180223.1310-1_all.deb ...
	Unpacking publicsuffix (20180223.1310-1) ...
	Selecting previously unselected package libdevmapper-event1.02.1:arm64.
	Preparing to unpack .../22-libdevmapper-event1.02.1_2%3a1.02.145-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking libdevmapper-event1.02.1:arm64 (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package liblvm2cmd2.02:arm64.
	Preparing to unpack .../23-liblvm2cmd2.02_2.02.176-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking liblvm2cmd2.02:arm64 (2.02.176-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package dmeventd.
	Preparing to unpack .../24-dmeventd_2%3a1.02.145-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking dmeventd (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package libroken18-heimdal:arm64.
	Preparing to unpack .../25-libroken18-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libroken18-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libasn1-8-heimdal:arm64.
	Preparing to unpack .../26-libasn1-8-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libasn1-8-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libavahi-common-data:arm64.
	Preparing to unpack .../27-libavahi-common-data_0.7-3.1ubuntu1.3_arm64.deb ...
	Unpacking libavahi-common-data:arm64 (0.7-3.1ubuntu1.3) ...
	Selecting previously unselected package libavahi-common3:arm64.
	Preparing to unpack .../28-libavahi-common3_0.7-3.1ubuntu1.3_arm64.deb ...
	Unpacking libavahi-common3:arm64 (0.7-3.1ubuntu1.3) ...
	Selecting previously unselected package libavahi-client3:arm64.
	Preparing to unpack .../29-libavahi-client3_0.7-3.1ubuntu1.3_arm64.deb ...
	Unpacking libavahi-client3:arm64 (0.7-3.1ubuntu1.3) ...
	Selecting previously unselected package libheimbase1-heimdal:arm64.
	Preparing to unpack .../30-libheimbase1-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libheimbase1-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libhcrypto4-heimdal:arm64.
	Preparing to unpack .../31-libhcrypto4-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libhcrypto4-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libwind0-heimdal:arm64.
	Preparing to unpack .../32-libwind0-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libwind0-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libhx509-5-heimdal:arm64.
	Preparing to unpack .../33-libhx509-5-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libhx509-5-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libkrb5-26-heimdal:arm64.
	Preparing to unpack .../34-libkrb5-26-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libkrb5-26-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libheimntlm0-heimdal:arm64.
	Preparing to unpack .../35-libheimntlm0-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libheimntlm0-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libgssapi3-heimdal:arm64.
	Preparing to unpack .../36-libgssapi3-heimdal_7.5.0+dfsg-1_arm64.deb ...
	Unpacking libgssapi3-heimdal:arm64 (7.5.0+dfsg-1) ...
	Selecting previously unselected package libsasl2-modules-db:arm64.
	Preparing to unpack .../37-libsasl2-modules-db_2.1.27~101-g0780600+dfsg-3ubuntu2.3_arm64.deb ...
	Unpacking libsasl2-modules-db:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Selecting previously unselected package libsasl2-2:arm64.
	Preparing to unpack .../38-libsasl2-2_2.1.27~101-g0780600+dfsg-3ubuntu2.3_arm64.deb ...
	Unpacking libsasl2-2:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Selecting previously unselected package libldap-common.
	Preparing to unpack .../39-libldap-common_2.4.45+dfsg-1ubuntu1.10_all.deb ...
	Unpacking libldap-common (2.4.45+dfsg-1ubuntu1.10) ...
	Selecting previously unselected package libldap-2.4-2:arm64.
	Preparing to unpack .../40-libldap-2.4-2_2.4.45+dfsg-1ubuntu1.10_arm64.deb ...
	Unpacking libldap-2.4-2:arm64 (2.4.45+dfsg-1ubuntu1.10) ...
	Selecting previously unselected package libnghttp2-14:arm64.
	Preparing to unpack .../41-libnghttp2-14_1.30.0-1ubuntu1_arm64.deb ...
	Unpacking libnghttp2-14:arm64 (1.30.0-1ubuntu1) ...
	Selecting previously unselected package librtmp1:arm64.
	Preparing to unpack .../42-librtmp1_2.4+20151223.gitfa8646d.1-1_arm64.deb ...
	Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-1) ...
	Selecting previously unselected package libcurl3-gnutls:arm64.
	Preparing to unpack .../43-libcurl3-gnutls_7.58.0-2ubuntu3.13_arm64.deb ...
	Unpacking libcurl3-gnutls:arm64 (7.58.0-2ubuntu3.13) ...
	Selecting previously unselected package liblvm2app2.2:arm64.
	Preparing to unpack .../44-liblvm2app2.2_2.02.176-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking liblvm2app2.2:arm64 (2.02.176-4.1ubuntu3.18.04.3) ...
	Selecting previously unselected package libnl-3-200:arm64.
	Preparing to unpack .../45-libnl-3-200_3.2.29-0ubuntu3_arm64.deb ...
	Unpacking libnl-3-200:arm64 (3.2.29-0ubuntu3) ...
	Selecting previously unselected package libreadline5:arm64.
	Preparing to unpack .../46-libreadline5_5.2+dfsg-3build1_arm64.deb ...
	Unpacking libreadline5:arm64 (5.2+dfsg-3build1) ...
	Selecting previously unselected package libsasl2-modules:arm64.
	Preparing to unpack .../47-libsasl2-modules_2.1.27~101-g0780600+dfsg-3ubuntu2.3_arm64.deb ...
	Unpacking libsasl2-modules:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Selecting previously unselected package libyajl2:arm64.
	Preparing to unpack .../48-libyajl2_2.1.0-2build1_arm64.deb ...
	Unpacking libyajl2:arm64 (2.1.0-2build1) ...
	Selecting previously unselected package libvirt0:arm64.
	Preparing to unpack .../49-libvirt0_4.0.0-1ubuntu8.19_arm64.deb ...
	Unpacking libvirt0:arm64 (4.0.0-1ubuntu8.19) ...
	Selecting previously unselected package lvm2.
	Preparing to unpack .../50-lvm2_2.02.176-4.1ubuntu3.18.04.3_arm64.deb ...
	Unpacking lvm2 (2.02.176-4.1ubuntu3.18.04.3) ...
	Setting up readline-common (7.0-3) ...
	Setting up libexpat1:arm64 (2.2.5-3ubuntu0.2) ...
	Setting up libicu60:arm64 (60.2-3ubuntu3.1) ...
	Setting up libnghttp2-14:arm64 (1.30.0-1ubuntu1) ...
	Setting up libldap-common (2.4.45+dfsg-1ubuntu1.10) ...
	Setting up libpsl5:arm64 (0.19.1-5build1) ...
	Setting up libnuma1:arm64 (2.0.11-2.1ubuntu0.1) ...
	Setting up libsasl2-modules-db:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Setting up libsasl2-2:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Setting up libroken18-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-1) ...
	Setting up libdevmapper1.02.1:arm64 (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Setting up libkrb5support0:arm64 (1.16-2ubuntu0.2) ...
	Setting up libxml2:arm64 (2.9.4+dfsg1-6.1ubuntu1.4) ...
	Setting up libdevmapper-event1.02.1:arm64 (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Setting up libyajl2:arm64 (2.1.0-2build1) ...
	Setting up krb5-locales (1.16-2ubuntu0.2) ...
	Setting up publicsuffix (20180223.1310-1) ...
	Setting up libapparmor1:arm64 (2.12-4ubuntu5.1) ...
	Setting up libssl1.1:arm64 (1.1.1-1ubuntu2.1~18.04.9) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1 /usr/lib/aarch64-linux-gnu/perl5/5.26 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.26 /usr/share/perl/5.26 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Setting up libheimbase1-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up openssl (1.1.1-1ubuntu2.1~18.04.9) ...
	Setting up libsqlite3-0:arm64 (3.22.0-1ubuntu0.4) ...
	Setting up dmsetup (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Setting up liblvm2app2.2:arm64 (2.02.176-4.1ubuntu3.18.04.3) ...
	Setting up libkeyutils1:arm64 (1.5.9-9.2ubuntu2) ...
	Setting up libreadline5:arm64 (5.2+dfsg-3build1) ...
	Setting up libsasl2-modules:arm64 (2.1.27~101-g0780600+dfsg-3ubuntu2.3) ...
	Setting up libnl-3-200:arm64 (3.2.29-0ubuntu3) ...
	Setting up ca-certificates (20210119~18.04.1) ...
	debconf: unable to initialize frontend: Dialog
	debconf: (TERM is not set, so the dialog frontend is not usable.)
	debconf: falling back to frontend: Readline
	debconf: unable to initialize frontend: Readline
	debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1 /usr/lib/aarch64-linux-gnu/perl5/5.26 /usr/share/perl5 /usr/lib/aarch64-linux-gnu/perl/5.26 /usr/share/perl/5.26 /usr/local/lib/site_perl /usr/lib/aarch64-linux-gnu/perl-base) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
	debconf: falling back to frontend: Teletype
	Updating certificates in /etc/ssl/certs...
	129 added, 0 removed; done.
	Setting up libdbus-1-3:arm64 (1.12.2-1ubuntu1.2) ...
	Setting up libavahi-common-data:arm64 (0.7-3.1ubuntu1.3) ...
	Setting up libk5crypto3:arm64 (1.16-2ubuntu0.2) ...
	Setting up libwind0-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libasn1-8-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libhcrypto4-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libhx509-5-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libkrb5-3:arm64 (1.16-2ubuntu0.2) ...
	Setting up libavahi-common3:arm64 (0.7-3.1ubuntu1.3) ...
	Setting up libkrb5-26-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up dbus (1.12.2-1ubuntu1.2) ...
	Setting up libheimntlm0-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libgssapi-krb5-2:arm64 (1.16-2ubuntu0.2) ...
	Setting up libavahi-client3:arm64 (0.7-3.1ubuntu1.3) ...
	Setting up libgssapi3-heimdal:arm64 (7.5.0+dfsg-1) ...
	Setting up libldap-2.4-2:arm64 (2.4.45+dfsg-1ubuntu1.10) ...
	Setting up libcurl3-gnutls:arm64 (7.58.0-2ubuntu3.13) ...
	Setting up libvirt0:arm64 (4.0.0-1ubuntu8.19) ...
	Setting up liblvm2cmd2.02:arm64 (2.02.176-4.1ubuntu3.18.04.3) ...
	Setting up dmeventd (2:1.02.145-4.1ubuntu3.18.04.3) ...
	Setting up lvm2 (2.02.176-4.1ubuntu3.18.04.3) ...
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	invoke-rc.d: could not determine current runlevel
	invoke-rc.d: policy-rc.d denied execution of start.
	Processing triggers for libc-bin (2.27-3ubuntu1.4) ...
	Processing triggers for ca-certificates (20210119~18.04.1) ...
	Updating certificates in /etc/ssl/certs...
	0 added, 0 removed; done.
	Running hooks in /etc/ca-certificates/update.d...
	done.

-- /stdout --
** stderr ** 
	debconf: delaying package configuration, since apt-utils is not installed
	dpkg: error processing archive /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb (--install):
	 package architecture (aarch64) does not match system (arm64)
	Errors were encountered while processing:
	 /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb

** /stderr **
pkg_install_test.go:87: failed to install "/home/jenkins/workspace/Docker_Linux_crio_arm64/out/docker-machine-driver-kvm2_1.22.0-0_arm64.deb" on "ubuntu:18.04": err=exit status 1, exit=1
--- FAIL: TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver (10.98s)
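Analysis: the failure is an architecture-name mismatch, not a missing dependency. dpkg on Ubuntu identifies the machine as "arm64" (Debian's name for 64-bit ARM), while the package's control file declares "aarch64" (the uname -m spelling), so dpkg refuses the archive. On a comparable host the two values can be compared directly; a diagnostic sketch, not part of the test suite:

	# Debian architecture of the host (prints "arm64" on 64-bit ARM Ubuntu)
	dpkg --print-architecture
	# Architecture field baked into the .deb (here "aarch64")
	dpkg-deb --field /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb Architecture

The fix belongs in the packaging step: the control file needs the Debian spelling (arm64) rather than the uname -m value. The debconf warnings in the stdout block above are unrelated noise from running apt without TERM set in a minimal container.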

TestMissingContainerUpgrade (78.26s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:327: (dbg) Run:  /tmp/minikube-v1.9.1.448716312.exe start -p missing-upgrade-20210708234432-257783 --memory=2200 --driver=docker  --container-runtime=crio

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:327: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.448716312.exe start -p missing-upgrade-20210708234432-257783 --memory=2200 --driver=docker  --container-runtime=crio: exit status 70 (45.174114173s)

-- stdout --
	! [missing-upgrade-20210708234432-257783] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=11942
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20210708234432-257783
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7846MB available) ...
	* Deleting "missing-upgrade-20210708234432-257783" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7846MB available) ...

-- /stdout --
** stderr ** 
	* minikube 1.22.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.22.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: check container "missing-upgrade-20210708234432-257783" running: temporary error created container "missing-upgrade-20210708234432-257783" is not running yet
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210708234432-257783" may fix it.: creating host: create: creating: create kic node: check container "missing-upgrade-20210708234432-257783" running: temporary error created container "missing-upgrade-20210708234432-257783" is not running yet
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
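Analysis: the first start attempt fails with a "temporary error created container ... is not running yet", meaning the kic container was created but its /sbin/init exited before minikube could reach it (the docker inspect further below shows ExitCode 1). When reproducing by hand, the container's own output is usually the quickest signal; a sketch, assuming the exited container has not yet been deleted and recreated:

	# Is the kic container present, and in what state?
	docker ps -a --filter name=missing-upgrade-20210708234432-257783
	# Why did the entrypoint exit?
	docker logs missing-upgrade-20210708234432-257783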
version_upgrade_test.go:327: (dbg) Run:  /tmp/minikube-v1.9.1.448716312.exe start -p missing-upgrade-20210708234432-257783 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:327: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.448716312.exe start -p missing-upgrade-20210708234432-257783 --memory=2200 --driver=docker  --container-runtime=crio: exit status 70 (17.94012959s)

-- stdout --
	* [missing-upgrade-20210708234432-257783] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=11942
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20210708234432-257783
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-20210708234432-257783" ...
	* Restarting existing docker container for "missing-upgrade-20210708234432-257783" ...

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210708234432-257783", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210708234432-257783" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210708234432-257783", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:327: (dbg) Run:  /tmp/minikube-v1.9.1.448716312.exe start -p missing-upgrade-20210708234432-257783 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:327: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.448716312.exe start -p missing-upgrade-20210708234432-257783 --memory=2200 --driver=docker  --container-runtime=crio: exit status 70 (6.689513225s)

-- stdout --
	* [missing-upgrade-20210708234432-257783] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=11942
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20210708234432-257783
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-20210708234432-257783" ...
	* Restarting existing docker container for "missing-upgrade-20210708234432-257783" ...

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210708234432-257783", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210708234432-257783" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210708234432-257783", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:333: release start failed: exit status 70
panic.go:613: *** TestMissingContainerUpgrade FAILED at 2021-07-08 23:45:45.432353253 +0000 UTC m=+2675.285166218
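Analysis: both retries die in the same place. Provisioning needs the host port that docker published for the guest's 22/tcp, and minikube v1.9.1 reads it with a Go template over docker inspect. Because the restarted container exits immediately, NetworkSettings.Ports is empty; index on the absent "22/tcp" key returns an untyped nil, and indexing into that nil yields exactly the "Template parsing error" in the stderr blocks above. The lookup can be reproduced against any container without a published 22/tcp, using the same template this report shows minikube running elsewhere:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-20210708234432-257783

Against a running node this prints the bound host port; against this exited container it fails with "error calling index: index of untyped nil".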
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect missing-upgrade-20210708234432-257783
helpers_test.go:236: (dbg) docker inspect missing-upgrade-20210708234432-257783:

-- stdout --
	[
	    {
	        "Id": "c7fff90ab75905d088a1c094df657ffa3b8c6e2f4d8f2e43feb3198c58519382",
	        "Created": "2021-07-08T23:44:59.574021934Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 1,
	            "Error": "",
	            "StartedAt": "2021-07-08T23:45:45.197188932Z",
	            "FinishedAt": "2021-07-08T23:45:45.196761201Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/c7fff90ab75905d088a1c094df657ffa3b8c6e2f4d8f2e43feb3198c58519382/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7fff90ab75905d088a1c094df657ffa3b8c6e2f4d8f2e43feb3198c58519382/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7fff90ab75905d088a1c094df657ffa3b8c6e2f4d8f2e43feb3198c58519382/hosts",
	        "LogPath": "/var/lib/docker/containers/c7fff90ab75905d088a1c094df657ffa3b8c6e2f4d8f2e43feb3198c58519382/c7fff90ab75905d088a1c094df657ffa3b8c6e2f4d8f2e43feb3198c58519382-json.log",
	        "Name": "/missing-upgrade-20210708234432-257783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20210708234432-257783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/093c2c8de8b745f4940aa01abf66d0d4f09f639a771fe8b1071b4f0d564f0719-init/diff:/var/lib/docker/overlay2/283bb542aec9deb3b8966a4f27920af8006a3d4f19630ad6bdb3b6945a68edac/diff:/var/lib/docker/overlay2/13a736bd6f06bcd554052a1ca335dd0c257430ff3bbd72221a9d79de4714e14e/diff:/var/lib/docker/overlay2/01554d1fe6cd20d5f6eadc5afa20f64845458886124414387d0fac87121a2923/diff:/var/lib/docker/overlay2/863de27399c21dfdc70837b2e9ca40e651c80daabb03458fc6b986938c641df9/diff:/var/lib/docker/overlay2/75d8a20618ae933bc0012a5fa4691cad054b464b14e81036243bed99b0eea260/diff:/var/lib/docker/overlay2/2326cd3ffdcb3bd4d71694eac01753868b48053d35c4e2adcbc7f0b9a32ecd85/diff:/var/lib/docker/overlay2/28949507e68becf5c73fd2a2279a706cf5aa65344e705fc085fa5ed4cdef8bfc/diff:/var/lib/docker/overlay2/996141a7a7a8a6093ca1d203aa5d967a9e97cfbb304a4dc40be56a6f9dc7f756/diff:/var/lib/docker/overlay2/9b434f771469f7ee48dd6539e52aee435b65760c9066d777bc70eba7292dc6ed/diff:/var/lib/docker/overlay2/58ff1d
79077a5905707835bcd9a41192d08a4301b3d85d0a642c1e3d6ba3d18f/diff:/var/lib/docker/overlay2/faa0bee1de0f8ea6fbcec2285d8a61e0177a9c3c2e1a5458188d402b819089fe/diff:/var/lib/docker/overlay2/e43444d74125d6dca56898ddd688d6549b68914291e01db359e29888f0555844/diff:/var/lib/docker/overlay2/cdfcf7c9b6e0659745d1b29cc86d4a444e1c29a2b9f3d84c2b1de7a86b412e6a/diff:/var/lib/docker/overlay2/bd6d510c59bf9bb452c4c966e9ecf4ba83c35c890beecd3a2e7d4b2474358203/diff:/var/lib/docker/overlay2/74cbc1b58b5a5af3b8f9b790a3fb670a56e4b93a73e2a77dba1315618417f610/diff:/var/lib/docker/overlay2/23ee9207f54a7a6e817be285bb147f9bbb630ed859d9f27262173ae6f6350adf/diff:/var/lib/docker/overlay2/f2d6f98d60301b32758e7f71ca2532b1cc085359389827e6441966881cd73086/diff:/var/lib/docker/overlay2/627829829fe26f0a010eb09b9ed969e30c4bacedb7f15618d9b204e2ac8935cb/diff:/var/lib/docker/overlay2/e715da7ed94d9e84c98c6cada5d2bba3f0f4f2eea66f96d61e82493303f0bf8e/diff:/var/lib/docker/overlay2/06bbc0ae5d80815a51ab73c0306faf46e3efa16cb9038b63f869875eec350be3/diff:/var/lib/d
ocker/overlay2/87d892d94cfc5c25d4f06fa0768bdc929ebe0161f51da7098735e0032eb67af3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/093c2c8de8b745f4940aa01abf66d0d4f09f639a771fe8b1071b4f0d564f0719/merged",
	                "UpperDir": "/var/lib/docker/overlay2/093c2c8de8b745f4940aa01abf66d0d4f09f639a771fe8b1071b4f0d564f0719/diff",
	                "WorkDir": "/var/lib/docker/overlay2/093c2c8de8b745f4940aa01abf66d0d4f09f639a771fe8b1071b4f0d564f0719/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20210708234432-257783",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20210708234432-257783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20210708234432-257783",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20210708234432-257783",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20210708234432-257783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5f880d5bf3ccb0f390ba5fe3b8cf0b9ba10e03eb410d564f02e2055cfdd870c2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/5f880d5bf3cc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "89849e25a2934a0dd88264e92a4f806e96607977b9d6093242da43092c3d44c0",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
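Analysis: the inspect output confirms the diagnosis: "Status": "exited" with "ExitCode": 1, and an empty "Ports": {} under NetworkSettings, so there is no 22/tcp host binding for the template to find. The three relevant fields can be pulled in one line instead of scanning the full JSON; a convenience sketch ({{json ...}} is standard docker template syntax):

	docker inspect -f '{{.State.Status}} exit={{.State.ExitCode}} ports={{json .NetworkSettings.Ports}}' missing-upgrade-20210708234432-257783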
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-20210708234432-257783 -n missing-upgrade-20210708234432-257783
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-20210708234432-257783 -n missing-upgrade-20210708234432-257783: exit status 7 (88.430136ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "missing-upgrade-20210708234432-257783" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "missing-upgrade-20210708234432-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-20210708234432-257783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-20210708234432-257783: (5.051032637s)
--- FAIL: TestMissingContainerUpgrade (78.26s)

TestPause/serial/Pause (5.99s)

=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-20210708233938-257783 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-20210708233938-257783 --alsologtostderr -v=5: exit status 80 (2.371213218s)

-- stdout --
	* Pausing node pause-20210708233938-257783 ... 
	
	

-- /stdout --
** stderr ** 
	I0708 23:44:52.874299  383864 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:44:52.874788  383864 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:52.874800  383864 out.go:299] Setting ErrFile to fd 2...
	I0708 23:44:52.874804  383864 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:52.874937  383864 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:44:52.875107  383864 out.go:293] Setting JSON to false
	I0708 23:44:52.875136  383864 mustload.go:65] Loading cluster: pause-20210708233938-257783
	I0708 23:44:52.875919  383864 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:52.916170  383864 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:52.916903  383864 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.22.0.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0/minikube-v1.22.0.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210708233938-257783 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)]="(MISSING)"
	I0708 23:44:52.920197  383864 out.go:165] * Pausing node pause-20210708233938-257783 ... 
	I0708 23:44:52.920217  383864 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:52.920480  383864 ssh_runner.go:149] Run: systemctl --version
	I0708 23:44:52.920512  383864 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:52.957103  383864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:53.054887  383864 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:53.062968  383864 pause.go:50] kubelet running: true
	I0708 23:44:53.063014  383864 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0708 23:44:53.254803  383864 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0708 23:44:53.531104  383864 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:53.538942  383864 pause.go:50] kubelet running: true
	I0708 23:44:53.538984  383864 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0708 23:44:53.680435  383864 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0708 23:44:54.221075  383864 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:54.230004  383864 pause.go:50] kubelet running: true
	I0708 23:44:54.230056  383864 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0708 23:44:54.374213  383864 retry.go:31] will retry after 655.06503ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0708 23:44:55.029972  383864 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:55.038054  383864 pause.go:50] kubelet running: true
	I0708 23:44:55.038101  383864 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0708 23:44:55.185181  383864 out.go:165] 
	W0708 23:44:55.185289  383864 out.go:230] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0708 23:44:55.185299  383864 out.go:230] * 
	* 
	W0708 23:44:55.191582  383864 out.go:230] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0708 23:44:55.194089  383864 out.go:165] 

** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-20210708233938-257783 --alsologtostderr -v=5" : exit status 80
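Analysis: the pause never reaches the container runtime. Every retry dies in sudo systemctl disable --now kubelet, where systemd's SysV-compatibility step (update-rc.d, invoked via /lib/systemd/systemd-sysv-install) aborts because kubelet's Default-Start entry declares no runlevels. The docker inspect below shows the node container running with all ports published, so the failure is entirely inside the guest. A sketch for reproducing and sidestepping it by hand, assuming the profile is still up; systemctl stop does not go through update-rc.d, so stopping without disabling avoids the aborting code path (at the cost of leaving the unit enabled):

	# Reproduce: the same command the pause path runs inside the node
	minikube ssh -p pause-20210708233938-257783 "sudo systemctl disable --now kubelet"
	# Sidestep: stop the unit without touching SysV enablement
	minikube ssh -p pause-20210708233938-257783 "sudo systemctl stop kubelet"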
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210708233938-257783
helpers_test.go:236: (dbg) docker inspect pause-20210708233938-257783:

-- stdout --
	[
	    {
	        "Id": "9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7",
	        "Created": "2021-07-08T23:42:55.939971333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 374514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-07-08T23:42:56.671510562Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/hosts",
	        "LogPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7-json.log",
	        "Name": "/pause-20210708233938-257783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210708233938-257783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210708233938-257783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa317
1af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/d
ocker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41
f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210708233938-257783",
	                "Source": "/var/lib/docker/volumes/pause-20210708233938-257783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210708233938-257783",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210708233938-257783",
	                "name.minikube.sigs.k8s.io": "pause-20210708233938-257783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3364fc967f3a3a4f088daf2fc73d5bc45f12bb4867ba695dabf0ca91254c0104",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49617"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49616"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49613"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49615"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49614"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3364fc967f3a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210708233938-257783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9e0e986f196e",
	                        "pause-20210708233938-257783"
	                    ],
	                    "NetworkID": "7afb1bbd4669bf981affda6e21a0542828c16cc07887274e53996cdbb87c5e05",
	                    "EndpointID": "cf78b06b889a67153f813b6dd94cd8e9e0adb49ff2586b7f7058289d1b323f20",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210708233938-257783 -n pause-20210708233938-257783
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p pause-20210708233938-257783 logs -n 25
helpers_test.go:253: TestPause/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                    Args                    |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                         | multinode-20210708232645-257783-m03        | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:35:24 UTC | Thu, 08 Jul 2021 23:36:14 UTC |
	|         | multinode-20210708232645-257783-m03        |                                            |         |         |                               |                               |
	|         | --driver=docker                            |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | multinode-20210708232645-257783-m03        | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:36:14 UTC | Thu, 08 Jul 2021 23:36:18 UTC |
	|         | multinode-20210708232645-257783-m03        |                                            |         |         |                               |                               |
	| delete  | -p                                         | multinode-20210708232645-257783            | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:36:18 UTC | Thu, 08 Jul 2021 23:36:23 UTC |
	|         | multinode-20210708232645-257783            |                                            |         |         |                               |                               |
	| start   | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:38:07 UTC | Thu, 08 Jul 2021 23:38:52 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --memory=2048 --driver=docker              |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:38:52 UTC | Thu, 08 Jul 2021 23:38:52 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --cancel-scheduled                         |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:05 UTC | Thu, 08 Jul 2021 23:39:12 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --schedule 5s                              |                                            |         |         |                               |                               |
	| delete  | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:12 UTC | Thu, 08 Jul 2021 23:39:17 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	| delete  | -p                                         | insufficient-storage-20210708233917-257783 | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:32 UTC | Thu, 08 Jul 2021 23:39:38 UTC |
	|         | insufficient-storage-20210708233917-257783 |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubenet-20210708233938-257783              | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:39:38 UTC |
	|         | kubenet-20210708233938-257783              |                                            |         |         |                               |                               |
	| delete  | -p                                         | flannel-20210708233938-257783              | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:39:39 UTC |
	|         | flannel-20210708233938-257783              |                                            |         |         |                               |                               |
	| delete  | -p false-20210708233939-257783             | false-20210708233939-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:39 UTC | Thu, 08 Jul 2021 23:39:40 UTC |
	| start   | -p                                         | force-systemd-env-20210708233940-257783    | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:40 UTC | Thu, 08 Jul 2021 23:40:42 UTC |
	|         | force-systemd-env-20210708233940-257783    |                                            |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=5 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-env-20210708233940-257783    | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:40:42 UTC | Thu, 08 Jul 2021 23:40:44 UTC |
	|         | force-systemd-env-20210708233940-257783    |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-flag-20210708234044-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:40:44 UTC | Thu, 08 Jul 2021 23:41:30 UTC |
	|         | force-systemd-flag-20210708234044-257783   |                                            |         |         |                               |                               |
	|         | --memory=2048 --force-systemd              |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-flag-20210708234044-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:41:30 UTC | Thu, 08 Jul 2021 23:41:33 UTC |
	|         | force-systemd-flag-20210708234044-257783   |                                            |         |         |                               |                               |
	| start   | -p                                         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:41:33 UTC | Thu, 08 Jul 2021 23:42:18 UTC |
	|         | cert-options-20210708234133-257783         |                                            |         |         |                               |                               |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                  |                                            |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15              |                                            |         |         |                               |                               |
	|         | --apiserver-names=localhost                |                                            |         |         |                               |                               |
	|         | --apiserver-names=www.google.com           |                                            |         |         |                               |                               |
	|         | --apiserver-port=8555                      |                                            |         |         |                               |                               |
	|         | --driver=docker                            |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| -p      | cert-options-20210708234133-257783         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:19 UTC | Thu, 08 Jul 2021 23:42:19 UTC |
	|         | ssh openssl x509 -text -noout -in          |                                            |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt      |                                            |         |         |                               |                               |
	| delete  | -p                                         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:19 UTC | Thu, 08 Jul 2021 23:42:22 UTC |
	|         | cert-options-20210708234133-257783         |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:22 UTC | Thu, 08 Jul 2021 23:43:17 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0               |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| stop    | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:43:17 UTC | Thu, 08 Jul 2021 23:43:20 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:43:20 UTC | Thu, 08 Jul 2021 23:44:07 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-beta.0        |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:07 UTC | Thu, 08 Jul 2021 23:44:29 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-beta.0        |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:29 UTC | Thu, 08 Jul 2021 23:44:32 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	| start   | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:44:47 UTC |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --install-addons=false                     |                                            |         |         |                               |                               |
	|         | --wait=all --driver=docker                 |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:47 UTC | Thu, 08 Jul 2021 23:44:52 UTC |
	|         | --alsologtostderr                          |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
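	Flattened back out of the wrapped cells, the last row of the table corresponds to this invocation (reconstructed from the row above; the out/minikube-linux-arm64 binary path follows the convention used throughout this report):

		out/minikube-linux-arm64 start -p pause-20210708233938-257783 --alsologtostderr -v=1 --driver=docker --container-runtime=crio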
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/07/08 23:44:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 23:44:47.154451  383102 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:44:47.154571  383102 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:47.154583  383102 out.go:299] Setting ErrFile to fd 2...
	I0708 23:44:47.154587  383102 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:47.154704  383102 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:44:47.154960  383102 out.go:293] Setting JSON to false
	I0708 23:44:47.156021  383102 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8836,"bootTime":1625779051,"procs":490,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:44:47.156093  383102 start.go:121] virtualization:  
	I0708 23:44:47.158605  383102 out.go:165] * [pause-20210708233938-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0708 23:44:47.160748  383102 out.go:165]   - MINIKUBE_LOCATION=11942
	I0708 23:44:47.162569  383102 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:44:47.164384  383102 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0708 23:44:47.166094  383102 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0708 23:44:47.166892  383102 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:44:47.221034  383102 docker.go:132] docker version: linux-20.10.7
	I0708 23:44:47.221102  383102 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:44:47.306208  383102 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:49 SystemTime:2021-07-08 23:44:47.254744355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:44:47.306303  383102 docker.go:244] overlay module found
	I0708 23:44:47.309309  383102 out.go:165] * Using the docker driver based on existing profile
	I0708 23:44:47.309327  383102 start.go:278] selected driver: docker
	I0708 23:44:47.309332  383102 start.go:751] validating driver "docker" against &{Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:47.309419  383102 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0708 23:44:47.309784  383102 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:44:47.393590  383102 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:49 SystemTime:2021-07-08 23:44:47.342522281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:44:47.393925  383102 cni.go:93] Creating CNI manager for ""
	I0708 23:44:47.393941  383102 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:44:47.393950  383102 start_flags.go:275] config:
	{Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:47.396046  383102 out.go:165] * Starting control plane node pause-20210708233938-257783 in cluster pause-20210708233938-257783
	I0708 23:44:47.396084  383102 cache.go:117] Beginning downloading kic base image for docker with crio
	I0708 23:44:47.398019  383102 out.go:165] * Pulling base image ...
	I0708 23:44:47.398037  383102 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:44:47.398068  383102 preload.go:150] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4
	I0708 23:44:47.398080  383102 cache.go:56] Caching tarball of preloaded images
	I0708 23:44:47.398205  383102 preload.go:174] Found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0708 23:44:47.398227  383102 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.2 on crio
	I0708 23:44:47.398319  383102 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/config.json ...
	I0708 23:44:47.398483  383102 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0708 23:44:47.436290  383102 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0708 23:44:47.436316  383102 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0708 23:44:47.436330  383102 cache.go:205] Successfully downloaded all kic artifacts
	I0708 23:44:47.436359  383102 start.go:313] acquiring machines lock for pause-20210708233938-257783: {Name:mk0dd574f5aab82d7e948dc25f56eae9437435ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 23:44:47.436434  383102 start.go:317] acquired machines lock for "pause-20210708233938-257783" in 54.777µs
	I0708 23:44:47.436455  383102 start.go:93] Skipping create...Using existing machine configuration
	I0708 23:44:47.436464  383102 fix.go:55] fixHost starting: 
	I0708 23:44:47.436724  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:47.471771  383102 fix.go:108] recreateIfNeeded on pause-20210708233938-257783: state=Running err=<nil>
	W0708 23:44:47.471801  383102 fix.go:134] unexpected machine state, will restart: <nil>
	I0708 23:44:47.474143  383102 out.go:165] * Updating the running docker "pause-20210708233938-257783" container ...
	I0708 23:44:47.474165  383102 machine.go:88] provisioning docker machine ...
	I0708 23:44:47.474179  383102 ubuntu.go:169] provisioning hostname "pause-20210708233938-257783"
	I0708 23:44:47.474233  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:47.518727  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:47.518901  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:47.518913  383102 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210708233938-257783 && echo "pause-20210708233938-257783" | sudo tee /etc/hostname
	I0708 23:44:47.662054  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210708233938-257783
	
	I0708 23:44:47.662122  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:47.698564  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:47.698719  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:47.698745  383102 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210708233938-257783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210708233938-257783/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210708233938-257783' | sudo tee -a /etc/hosts; 
				fi
			fi
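	The block above is the usual idempotent hostname pin: rewrite an existing 127.0.1.1 entry if there is one, append otherwise, and do nothing when the name already resolves. A minimal manual check of the result (a sketch; the output shown assumes the rewrite/append branch ran):

		$ out/minikube-linux-arm64 -p pause-20210708233938-257783 ssh "grep 127.0.1.1 /etc/hosts"
		127.0.1.1 pause-20210708233938-257783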
	I0708 23:44:47.806503  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0708 23:44:47.806520  383102 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube}
	I0708 23:44:47.806546  383102 ubuntu.go:177] setting up certificates
	I0708 23:44:47.806556  383102 provision.go:83] configureAuth start
	I0708 23:44:47.806605  383102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210708233938-257783
	I0708 23:44:47.841582  383102 provision.go:137] copyHostCerts
	I0708 23:44:47.841630  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem, removing ...
	I0708 23:44:47.841642  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem
	I0708 23:44:47.841700  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem (1078 bytes)
	I0708 23:44:47.841780  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem, removing ...
	I0708 23:44:47.841793  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem
	I0708 23:44:47.841816  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem (1123 bytes)
	I0708 23:44:47.841862  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem, removing ...
	I0708 23:44:47.841871  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem
	I0708 23:44:47.841892  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem (1679 bytes)
	I0708 23:44:47.841933  383102 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem org=jenkins.pause-20210708233938-257783 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210708233938-257783]
	I0708 23:44:48.952877  383102 provision.go:171] copyRemoteCerts
	I0708 23:44:48.952938  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 23:44:48.952979  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:48.988956  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.069409  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 23:44:49.084030  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0708 23:44:49.098201  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 23:44:49.112707  383102 provision.go:86] duration metric: configureAuth took 1.306144285s
	I0708 23:44:49.112722  383102 ubuntu.go:193] setting minikube options for container-runtime
	I0708 23:44:49.112945  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.147842  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:49.148030  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:49.148050  383102 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	I0708 23:44:49.265435  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
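	The %!s(MISSING) in the logged command above is Go's fmt marker for a format verb that was logged without its argument; only the log line is mangled. The echoed output shows what actually reached the tee target, so on disk the file should read (a sketch, same path as above):

		$ cat /etc/sysconfig/crio.minikube
		CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '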
	
	I0708 23:44:49.265449  383102 machine.go:91] provisioned docker machine in 1.791277399s
	I0708 23:44:49.265466  383102 start.go:267] post-start starting for "pause-20210708233938-257783" (driver="docker")
	I0708 23:44:49.265473  383102 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 23:44:49.265521  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 23:44:49.265564  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.302440  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.385342  383102 ssh_runner.go:149] Run: cat /etc/os-release
	I0708 23:44:49.387501  383102 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0708 23:44:49.387521  383102 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0708 23:44:49.387533  383102 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0708 23:44:49.387542  383102 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0708 23:44:49.387552  383102 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/addons for local assets ...
	I0708 23:44:49.387592  383102 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/files for local assets ...
	I0708 23:44:49.387720  383102 start.go:270] post-start completed in 122.24664ms
	I0708 23:44:49.387753  383102 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 23:44:49.387787  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.422565  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.503288  383102 fix.go:57] fixHost completed within 2.066821667s
	I0708 23:44:49.503310  383102 start.go:80] releasing machines lock for "pause-20210708233938-257783", held for 2.066864546s
	I0708 23:44:49.503369  383102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210708233938-257783
	I0708 23:44:49.537513  383102 ssh_runner.go:149] Run: systemctl --version
	I0708 23:44:49.537553  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.537599  383102 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0708 23:44:49.537656  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.578213  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.591758  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.667104  383102 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0708 23:44:49.802373  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0708 23:44:49.809858  383102 docker.go:153] disabling docker service ...
	I0708 23:44:49.809898  383102 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0708 23:44:49.818109  383102 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0708 23:44:49.826668  383102 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0708 23:44:49.957409  383102 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0708 23:44:50.082177  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0708 23:44:50.090087  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 23:44:50.100877  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0708 23:44:50.109868  383102 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0708 23:44:50.109919  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
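	Net effect of the crictl.yaml tee (again logged with the %!s(MISSING) artifact) and the two sed edits, sketching only the touched lines and assuming stock kicbase defaults for the rest of /etc/crio/crio.conf:

		# /etc/crictl.yaml
		runtime-endpoint: unix:///var/run/crio/crio.sock
		image-endpoint: unix:///var/run/crio/crio.sock

		# /etc/crio/crio.conf (lines rewritten by sed)
		pause_image = "k8s.gcr.io/pause:3.4.1"
		cni_default_network = "kindnet"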
	I0708 23:44:50.116503  383102 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 23:44:50.121833  383102 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 23:44:50.126949  383102 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0708 23:44:50.251265  383102 ssh_runner.go:149] Run: sudo systemctl start crio
	I0708 23:44:50.259385  383102 start.go:386] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 23:44:50.259425  383102 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0708 23:44:50.261926  383102 start.go:411] Will wait 60s for crictl version
	I0708 23:44:50.261961  383102 ssh_runner.go:149] Run: sudo crictl version
	I0708 23:44:50.286962  383102 start.go:420] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0708 23:44:50.287041  383102 ssh_runner.go:149] Run: crio --version
	I0708 23:44:50.352750  383102 ssh_runner.go:149] Run: crio --version
	I0708 23:44:50.423233  383102 out.go:165] * Preparing Kubernetes v1.21.2 on CRI-O 1.20.3 ...
	I0708 23:44:50.423307  383102 cli_runner.go:115] Run: docker network inspect pause-20210708233938-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0708 23:44:50.464228  383102 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0708 23:44:50.467264  383102 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:44:50.467314  383102 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:44:50.490940  383102 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:44:50.490957  383102 crio.go:333] Images already preloaded, skipping extraction
	I0708 23:44:50.490993  383102 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:44:50.512176  383102 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:44:50.512192  383102 cache_images.go:74] Images are preloaded, skipping loading
	I0708 23:44:50.512245  383102 ssh_runner.go:149] Run: crio config
	I0708 23:44:50.587658  383102 cni.go:93] Creating CNI manager for ""
	I0708 23:44:50.587677  383102 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:44:50.587685  383102 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0708 23:44:50.587790  383102 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210708233938-257783 NodeName:pause-20210708233938-257783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0708 23:44:50.587905  383102 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210708233938-257783"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
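	The "0%!"(MISSING) triplets under evictionHard are the same fmt logging artifact noted earlier; the rendered file carries plain "0%" values. minikube writes this document to /var/tmp/minikube/kubeadm.yaml.new (the 1884-byte scp below) and, on a fresh node, would hand it to the pinned kubeadm binary, roughly (a sketch; on a restart like this one, individual init phases may run instead):

		sudo /var/lib/minikube/binaries/v1.21.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml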
	
	I0708 23:44:50.587994  383102 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-20210708233938-257783 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
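	The drop-in above is what lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the 558-byte scp below; picking it up by hand would be the conventional (a sketch):

		sudo systemctl daemon-reload
		sudo systemctl restart kubelet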
	I0708 23:44:50.588044  383102 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0708 23:44:50.593749  383102 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 23:44:50.593819  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 23:44:50.599162  383102 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (558 bytes)
	I0708 23:44:50.609681  383102 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 23:44:50.620170  383102 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1884 bytes)
	I0708 23:44:50.630479  383102 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0708 23:44:50.632974  383102 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783 for IP: 192.168.58.2
	I0708 23:44:50.633021  383102 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key
	I0708 23:44:50.633039  383102 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key
	I0708 23:44:50.633098  383102 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.key
	I0708 23:44:50.633117  383102 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.key.cee25041
	I0708 23:44:50.633142  383102 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.key
	I0708 23:44:50.633227  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem (1338 bytes)
	W0708 23:44:50.633268  383102 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783_empty.pem, impossibly tiny 0 bytes
	I0708 23:44:50.633280  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem (1675 bytes)
	I0708 23:44:50.633305  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem (1078 bytes)
	I0708 23:44:50.633332  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem (1123 bytes)
	I0708 23:44:50.633356  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem (1679 bytes)
	I0708 23:44:50.634343  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0708 23:44:50.648438  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 23:44:50.662480  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 23:44:50.677256  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 23:44:50.691568  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 23:44:50.705113  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0708 23:44:50.718728  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 23:44:50.733001  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 23:44:50.748832  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 23:44:50.762662  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem --> /usr/share/ca-certificates/257783.pem (1338 bytes)
	I0708 23:44:50.776552  383102 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 23:44:50.786598  383102 ssh_runner.go:149] Run: openssl version
	I0708 23:44:50.790834  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 23:44:50.796632  383102 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.799083  383102 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jul  8 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.799118  383102 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.803062  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 23:44:50.808543  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257783.pem && ln -fs /usr/share/ca-certificates/257783.pem /etc/ssl/certs/257783.pem"
	I0708 23:44:50.814370  383102 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.816803  383102 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jul  8 23:18 /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.816856  383102 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.820832  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257783.pem /etc/ssl/certs/51391683.0"
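	The b5213941.0 and 51391683.0 targets are OpenSSL subject-hash links: each CA certificate is symlinked under the hash openssl computes for it, which is how verifiers locate it in /etc/ssl/certs. Reproducing the first link by hand (a sketch; the hash value is taken from the link name created above):

		$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		b5213941
		$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0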
	I0708 23:44:50.826095  383102 kubeadm.go:390] StartCluster: {Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:50.826162  383102 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 23:44:50.826221  383102 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 23:44:50.849897  383102 cri.go:76] found id: "b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7"
	I0708 23:44:50.849919  383102 cri.go:76] found id: "7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a"
	I0708 23:44:50.849943  383102 cri.go:76] found id: "aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e"
	I0708 23:44:50.849950  383102 cri.go:76] found id: "0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef"
	I0708 23:44:50.849954  383102 cri.go:76] found id: "66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e"
	I0708 23:44:50.849963  383102 cri.go:76] found id: "76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41"
	I0708 23:44:50.849967  383102 cri.go:76] found id: "f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c"
	I0708 23:44:50.849975  383102 cri.go:76] found id: ""
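The "found id" lines are the crictl output above split on newlines; the final empty id is the artifact of splitting a newline-terminated buffer. A sketch of that step, assuming sudo and crictl are available on the node:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listKubeSystemIDs mirrors the crictl invocation in the log: list all
    // containers labelled with the kube-system pod namespace, IDs only.
    func listKubeSystemIDs() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	// A trailing "\n" yields one empty element, matching the empty
    	// `found id: ""` entry logged above.
    	ids := strings.Split(string(out), "\n")
    	for _, id := range ids {
    		fmt.Printf("found id: %q\n", id)
    	}
    	return ids, nil
    }

    func main() {
    	if _, err := listKubeSystemIDs(); err != nil {
    		fmt.Println(err)
    	}
    }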
	I0708 23:44:50.850009  383102 ssh_runner.go:149] Run: sudo runc list -f json
	I0708 23:44:50.888444  383102 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef","pid":1704,"status":"running","bundle":"/run/containers/storage/overlay-containers/0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef/userdata","rootfs":"/var/lib/containers/storage/overlay/cdce3ed6af07ab111ab2fb108c2309db54d9634ce1811e68896b699446ff3e45/merged","created":"2021-07-08T23:43:28.796258779Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1c9d3bb9","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1c9d3bb9\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.containe
r.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.409347635Z","io.kubernetes.cri-o.Image":"9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.2","io.kubernetes.cri-o.ImageRef":"9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c0a79d1d801cddeaa32444663181957f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210708233938-257783_c0a79d1d801cddeaa32444663181957f/kube-controller-mana
ger/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cdce3ed6af07ab111ab2fb108c2309db54d9634ce1811e68896b699446ff3e45/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_pa
th\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c0a79d1d801cddeaa32444663181957f/containers/kube-controller-manager/b3e49874\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c0a79d1d801cddeaa32444663181957f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exe
c\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.hash":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.seen":"2021-07-08T23:43:23.463755710Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","pid":1454,"status":"running","bundle":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata","rootfs":"/var/lib/containers/storage/overlay/995db9ddd5bff03a3e4252f22825a88a5095babda303cc304d0d9f42db6e7025/merged","created":"2021-07-08T23:43:27
.761173536Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.58.2:8443\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463729331Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"48a917795140826e0af6da63b039926b\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.500118923Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9
e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"48a917795140826e0af6da63b039926b\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210708233938-257783\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210708233938-257783_48a917795140826e0af6da63b039926b/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210708233938-257783\",\"uid\":\"48a917795140826e0af6da63b039926b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/995db9ddd5bff03a3e4252f22825a88a5095babda303cc304d0d9f42db6e7025/merged","io.kubernete
s.cri-o.Name":"k8s_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"48a917795140826e0af6da63b039926b","kubeadm.kubernetes.io/kub
e-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48a917795140826e0af6da63b039926b","kubernetes.io/config.seen":"2021-07-08T23:43:23.463729331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e","pid":1696,"status":"running","bundle":"/run/containers/storage/overlay-containers/66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e/userdata","rootfs":"/var/lib/containers/storage/overlay/c744810c8a09ccc54eaf6b538b13405ff75025ea0fcdf7c4f79b45507c315ea4/merged","created":"2021-07-08T23:43:28.74704463Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a5e28f4f","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationM
essagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a5e28f4f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.458654206Z","io.kubernetes.cri-o.Image":"ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.2","io.kubernetes.cri-o.ImageRef":"ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io
.kubernetes.pod.uid\":\"636f853856e082c029b85fb89a036300\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210708233938-257783_636f853856e082c029b85fb89a036300/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c744810c8a09ccc54eaf6b538b13405ff75025ea0fcdf7c4f79b45507c315ea4/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin"
:"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/636f853856e082c029b85fb89a036300/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/636f853856e082c029b85fb89a036300/containers/kube-scheduler/6f04df63\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"636f853856e082c029b85fb89a036300","kubernetes.io/config.hash":"636f853856e082c029b85fb89a036300","kubernetes.io/config.seen":"2021-07-08T23:43:23.463757039Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStop
USec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41","pid":1601,"status":"running","bundle":"/run/containers/storage/overlay-containers/76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41/userdata","rootfs":"/var/lib/containers/storage/overlay/01ae8557075556025556e28b3617bfe934a965557cd8fd4d435456c30b0c4d27/merged","created":"2021-07-08T23:43:28.33827029Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"364fba0d","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"364fba0d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",
\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.136323454Z","io.kubernetes.cri-o.Image":"05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2349193ca86d9558bc895849265d2bbd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210708233938-257783_2349193ca86d9558bc895849265d2bbd/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overla
y/01ae8557075556025556e28b3617bfe934a965557cd8fd4d435456c30b0c4d27/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2349193ca86d9558bc895849265d2bbd/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2349193ca86d9558bc895849265d2bbd/containe
rs/etcd/486736f1\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2349193ca86d9558bc895849265d2bbd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2349193ca86d9558bc895849265d2bbd","kubernetes.io/config.seen":"2021-07-08T23:43:23.463758229Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","pid":2476,"status":"running","bundle":"/run/containers/storage/overlay-co
ntainers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata","rootfs":"/var/lib/containers/storage/overlay/8e2d756f3e3d21bd67765fd6dc79466722b3777db9c24dd3c63a849026ee706e/merged","created":"2021-07-08T23:43:58.920322142Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:58.220124726Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:58.842364249Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":
"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kindnet-589hd","io.kubernetes.cri-o.Labels":"{\"app\":\"kindnet\",\"pod-template-generation\":\"1\",\"controller-revision-hash\":\"694b6fb659\",\"io.kubernetes.pod.uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kindnet-589hd\",\"tier\":\"node\",\"k8s-app\":\"kindnet\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-589hd_55f424f0-d7a4-418f-8572-27041384f3ba/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-589hd\",\"uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8e2d756f3e3d21bd67765fd6dc7946672
2b3777db9c24dd3c63a849026ee706e/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/shm","io.kubernetes.pod.name":"kindnet-589hd","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"55f424f0-d7a4-418f-8572-27041384f3ba","k8s-app":"kindnet","ku
bernetes.io/config.seen":"2021-07-08T23:43:58.220124726Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a","pid":2554,"status":"running","bundle":"/run/containers/storage/overlay-containers/7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a/userdata","rootfs":"/var/lib/containers/storage/overlay/462c6688ce1d023d5df1b74afd144759f5b176d71761f6bc62065141ab582bf5/merged","created":"2021-07-08T23:43:59.140412019Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"73cb1b1","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"73cb1b1\",\"io.kub
ernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:59.029318377Z","io.kubernetes.cri-o.Image":"d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.2","io.kubernetes.cri-o.ImageRef":"d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-rb2ws\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-rb2ws_06346e
2c-5d4d-4e26-9d87-bfe3d4715985/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/462c6688ce1d023d5df1b74afd144759f5b176d71761f6bc62065141ab582bf5/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"con
tainer_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/containers/kube-proxy/343cc99a\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/volumes/kubernetes.io~projected/kube-api-access-2vk7z\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-rb2ws","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"06346e2c-5d4d-4e26-9d87-bfe3d4715985","kubernetes.io/config.se
en":"2021-07-08T23:43:58.246007990Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","pid":1533,"status":"running","bundle":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata","rootfs":"/var/lib/containers/storage/overlay/72dc08e222ab55d43eaea1871cbcf2481a5b6ed4398bc531f5b83c9c2bf82abc/merged","created":"2021-07-08T23:43:27.9820761Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"2349193ca86d9558bc895849265d2bbd\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.58.2:2379\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463758229Z\",\"kubernetes.io/config.source\":\"file\"}","io.kub
ernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.685254972Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"etcd-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"2349193ca86d9558bc895849265d2bbd\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210708233938-257783\",\"io.kubernetes.c
ontainer.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210708233938-257783_2349193ca86d9558bc895849265d2bbd/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210708233938-257783\",\"uid\":\"2349193ca86d9558bc895849265d2bbd\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72dc08e222ab55d43eaea1871cbcf2481a5b6ed4398bc531f5b83c9c2bf82abc/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeH
andler":"","io.kubernetes.cri-o.SandboxID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2349193ca86d9558bc895849265d2bbd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2349193ca86d9558bc895849265d2bbd","kubernetes.io/config.seen":"2021-07-08T23:43:23.463758229Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","pid":1526,"status":"running","bundle":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd0
02fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata","rootfs":"/var/lib/containers/storage/overlay/f9500faec4678f8eedd7e562c4634c3983eea5a8367363ee2114993ba2617eb9/merged","created":"2021-07-08T23:43:28.02245442Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"636f853856e082c029b85fb89a036300\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463757039Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.680724389Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true"
,"io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"636f853856e082c029b85fb89a036300\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210708233938-257783\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210708233938-257783_636f853856e082c029b85fb89a036300/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210708233938-257783\",\"uid\":\"636f853856e082c029b85fb89a036300\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/
containers/storage/overlay/f9500faec4678f8eedd7e562c4634c3983eea5a8367363ee2114993ba2617eb9/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.p
od.namespace":"kube-system","io.kubernetes.pod.uid":"636f853856e082c029b85fb89a036300","kubernetes.io/config.hash":"636f853856e082c029b85fb89a036300","kubernetes.io/config.seen":"2021-07-08T23:43:23.463757039Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e","pid":2536,"status":"running","bundle":"/run/containers/storage/overlay-containers/aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e/userdata","rootfs":"/var/lib/containers/storage/overlay/196f295295a6ebd45ec80ca7af0769b45f724efb7c52e5a54faf0894d74b8486/merged","created":"2021-07-08T23:43:59.094903496Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"42880ebe","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernet
es.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"42880ebe\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:59.000934542Z","io.kubernetes.cri-o.Image":"f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-589hd\",\"io.kubernetes.pod.namespace\":\"kube-system\"
,\"io.kubernetes.pod.uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-589hd_55f424f0-d7a4-418f-8572-27041384f3ba/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/196f295295a6ebd45ec80ca7af0769b45f724efb7c52e5a54faf0894d74b8486/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":
"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/containers/kindnet-cni/63efdea9\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/volumes/kubernetes.io~projected/kube-api-access-vxfqs\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-589hd","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"55f424f0-d7
a4-418f-8572-27041384f3ba","kubernetes.io/config.seen":"2021-07-08T23:43:58.220124726Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7","pid":3117,"status":"running","bundle":"/run/containers/storage/overlay-containers/b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7/userdata","rootfs":"/var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged","created":"2021-07-08T23:44:44.929527419Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3ba99b8a","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"T
CP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3ba99b8a\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:44:44.869981068Z","io.kubernetes.cr
i-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-mnwpk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-mnwpk_cd8ce294-9dba-4d2e-8793-cc0862414323/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.ResolvPath":"/run/co
ntainers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/containers/coredns/ebcb451b\",\"readonly\":false},{\"contai
ner_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/volumes/kubernetes.io~projected/kube-api-access-wjk4b\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-mnwpk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cd8ce294-9dba-4d2e-8793-cc0862414323","kubernetes.io/config.seen":"2021-07-08T23:44:44.378304571Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","pid":3088,"status":"running","bundle":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata","rootfs":"/var/lib/containers/storage/overlay/7c390972a7ebbdd53365500f2760439b1c797f16f323006acdd93709af97278c/merged",
"created":"2021-07-08T23:44:44.819414214Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-07-08T23:44:44.378304571Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"vethad721594\",\"mac\":\"fa:ff:ad:c6:25:66\"},{\"name\":\"eth0\",\"mac\":\"22:75:6a:ff:8f:5c\",\"sandbox\":\"/var/run/netns/ec4d5ca9-9e24-41d3-8013-97d3a7a811bd\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T2
3:44:44.69371705Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-mnwpk","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-mnwpk","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-mnwpk\",\"pod-template-hash\":\"558bd4d5db\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-mnwpk_cd8ce294-9dba-4d2e-8793-cc0862414323/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-mnwpk\",\"uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\",\"namespace\":\
"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7c390972a7ebbdd53365500f2760439b1c797f16f323006acdd93709af97278c/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-mnwpk","
io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cd8ce294-9dba-4d2e-8793-cc0862414323","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-07-08T23:44:44.378304571Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","pid":1483,"status":"running","bundle":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata","rootfs":"/var/lib/containers/storage/overlay/efa90599834890f8bd3da27a3a749f188e95537431bf95c2fbbda75a1a376820/merged","created":"2021-07-08T23:43:27.88584733Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463755710Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes
.io/config.hash\":\"c0a79d1d801cddeaa32444663181957f\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.588501479Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"c0a79d1d801cddeaa32444663181957f\",\"io.kubernetes.container.name\":\"
POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210708233938-257783\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210708233938-257783_c0a79d1d801cddeaa32444663181957f/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210708233938-257783\",\"uid\":\"c0a79d1d801cddeaa32444663181957f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efa90599834890f8bd3da27a3a749f188e95537431bf95c2fbbda75a1a376820/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"tr
ue","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.hash":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.seen":"2021-07-08T23:43:23.463755710Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"edb6f1460db485be501f94018d5caf7a
576fdd2e67b51c15322cf821191a0ebb","pid":2500,"status":"running","bundle":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata","rootfs":"/var/lib/containers/storage/overlay/621192df224b4f253243649a866cb69454571da103b6d3f3b1234d53c88440fd/merged","created":"2021-07-08T23:43:58.96207639Z","annotations":{"controller-revision-hash":"6896ccdc5","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:58.246007990Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:58.878275037Z","io.kubernetes.cri-o.HostName":"pause-202107
08233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-proxy-rb2ws","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-rb2ws\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"6896ccdc5\",\"pod-template-generation\":\"1\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-rb2ws_06346e2c-5d4d-4e26-9d87-bfe3d4715985/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-rb2ws\",\"uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"
/var/lib/containers/storage/overlay/621192df224b4f253243649a866cb69454571da103b6d3f3b1234d53c88440fd/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/shm","io.kubernetes.pod.name":"kube-proxy-rb2ws","io.kubernetes.pod.namespace":"kube-system","io.kuberne
tes.pod.uid":"06346e2c-5d4d-4e26-9d87-bfe3d4715985","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-07-08T23:43:58.246007990Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c","pid":1608,"status":"running","bundle":"/run/containers/storage/overlay-containers/f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c/userdata","rootfs":"/var/lib/containers/storage/overlay/acf62325b91e31207f04e3f39616a0820b0809fa7e55c2b2ce5eaf30b7367ddc/merged","created":"2021-07-08T23:43:28.292409803Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"44b38584","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o
.Annotations":"{\"io.kubernetes.container.hash\":\"44b38584\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.165591981Z","io.kubernetes.cri-o.Image":"2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.2","io.kubernetes.cri-o.ImageRef":"2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"48a917795140826e0
af6da63b039926b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210708233938-257783_48a917795140826e0af6da63b039926b/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/acf62325b91e31207f04e3f39616a0820b0809fa7e55c2b2ce5eaf30b7367ddc/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":
"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/48a917795140826e0af6da63b039926b/containers/kube-apiserver/141310e0\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/48a917795140826e0af6da63b039926b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.pod.nam
espace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"48a917795140826e0af6da63b039926b","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48a917795140826e0af6da63b039926b","kubernetes.io/config.seen":"2021-07-08T23:43:23.463729331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0708 23:44:50.889436  383102 cri.go:113] list returned 14 containers
	I0708 23:44:50.889463  383102 cri.go:116] container: {ID:0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef Status:running}
	I0708 23:44:50.889494  383102 cri.go:122] skipping {0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef running}: state = "running", want "paused"
	I0708 23:44:50.889513  383102 cri.go:116] container: {ID:153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4 Status:running}
	I0708 23:44:50.889538  383102 cri.go:118] skipping 153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4 - not in ps
	I0708 23:44:50.889556  383102 cri.go:116] container: {ID:66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e Status:running}
	I0708 23:44:50.889571  383102 cri.go:122] skipping {66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e running}: state = "running", want "paused"
	I0708 23:44:50.889587  383102 cri.go:116] container: {ID:76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41 Status:running}
	I0708 23:44:50.889601  383102 cri.go:122] skipping {76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41 running}: state = "running", want "paused"
	I0708 23:44:50.889626  383102 cri.go:116] container: {ID:79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2 Status:running}
	I0708 23:44:50.889644  383102 cri.go:118] skipping 79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2 - not in ps
	I0708 23:44:50.889657  383102 cri.go:116] container: {ID:7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a Status:running}
	I0708 23:44:50.889671  383102 cri.go:122] skipping {7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a running}: state = "running", want "paused"
	I0708 23:44:50.889687  383102 cri.go:116] container: {ID:7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3 Status:running}
	I0708 23:44:50.889711  383102 cri.go:118] skipping 7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3 - not in ps
	I0708 23:44:50.889726  383102 cri.go:116] container: {ID:98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957 Status:running}
	I0708 23:44:50.889739  383102 cri.go:118] skipping 98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957 - not in ps
	I0708 23:44:50.889751  383102 cri.go:116] container: {ID:aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e Status:running}
	I0708 23:44:50.889763  383102 cri.go:122] skipping {aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e running}: state = "running", want "paused"
	I0708 23:44:50.889786  383102 cri.go:116] container: {ID:b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 Status:running}
	I0708 23:44:50.889802  383102 cri.go:122] skipping {b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 running}: state = "running", want "paused"
	I0708 23:44:50.889816  383102 cri.go:116] container: {ID:ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f Status:running}
	I0708 23:44:50.889831  383102 cri.go:118] skipping ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f - not in ps
	I0708 23:44:50.889844  383102 cri.go:116] container: {ID:ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe Status:running}
	I0708 23:44:50.889868  383102 cri.go:118] skipping ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe - not in ps
	I0708 23:44:50.889884  383102 cri.go:116] container: {ID:edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb Status:running}
	I0708 23:44:50.889899  383102 cri.go:118] skipping edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb - not in ps
	I0708 23:44:50.889910  383102 cri.go:116] container: {ID:f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c Status:running}
	I0708 23:44:50.889924  383102 cri.go:122] skipping {f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c running}: state = "running", want "paused"
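The cri.go lines above are minikube's pause filter in action: cri.go:113 lists every CRI container, cri.go:122 skips any whose state is "running" when the caller wants "paused" ones, and cri.go:118 skips IDs that never appeared in the ps listing (sandbox/infra containers). A minimal Go sketch of that filter, using illustrative types rather than minikube's real ones:

    package main

    import "fmt"

    // Container mirrors the two fields the log lines show: an ID and a state.
    // These types are illustrative, not minikube's actual definitions.
    type Container struct {
        ID     string
        Status string
    }

    // filterByState keeps only containers whose status matches want, and only
    // IDs present in the ps listing, mirroring the skip messages in the log.
    func filterByState(all []Container, inPS map[string]bool, want string) []string {
        var keep []string
        for _, c := range all {
            if !inPS[c.ID] {
                fmt.Printf("skipping %s - not in ps\n", c.ID)
                continue
            }
            if c.Status != want {
                fmt.Printf("skipping %v: state = %q, want %q\n", c, c.Status, want)
                continue
            }
            keep = append(keep, c.ID)
        }
        return keep
    }

    func main() {
        all := []Container{{ID: "0cb308b9b448f", Status: "running"}}
        inPS := map[string]bool{"0cb308b9b448f": true}
        // With want == "paused" and every container running, nothing survives
        // the filter -- which is exactly why all 14 entries above are skipped.
        fmt.Println(filterByState(all, inPS, "paused"))
    }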
	I0708 23:44:50.889976  383102 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 23:44:50.896457  383102 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0708 23:44:50.896471  383102 kubeadm.go:600] restartCluster start
	I0708 23:44:50.896504  383102 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0708 23:44:50.901607  383102 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 23:44:50.902345  383102 kubeconfig.go:93] found "pause-20210708233938-257783" server: "https://192.168.58.2:8443"
	I0708 23:44:50.902810  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
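The kapi.go dump is a client-go rest.Config assembled from the profile's mTLS material: client.crt/client.key under the profile directory plus the shared .minikube/ca.crt. A sketch of an equivalent construction, assuming k8s.io/client-go is on the module path (paths are the ones printed in the log):

    package main

    import (
        "fmt"
        "path/filepath"

        "k8s.io/client-go/rest"
    )

    // profileConfig builds a rest.Config like the kapi.go dump above: host from
    // the kubeconfig server entry, mutual-TLS files from the profile directory.
    func profileConfig(home, profile string) *rest.Config {
        p := filepath.Join(home, "profiles", profile)
        return &rest.Config{
            Host: "https://192.168.58.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: filepath.Join(p, "client.crt"),
                KeyFile:  filepath.Join(p, "client.key"),
                CAFile:   filepath.Join(home, "ca.crt"),
            },
        }
    }

    func main() {
        cfg := profileConfig(".minikube", "pause-20210708233938-257783")
        fmt.Println(cfg.Host, cfg.TLSClientConfig.CertFile)
    }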
	I0708 23:44:50.904266  383102 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 23:44:50.910038  383102 api_server.go:164] Checking apiserver status ...
	I0708 23:44:50.910093  383102 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:44:50.921551  383102 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1608/cgroup
	I0708 23:44:50.927266  383102 api_server.go:180] apiserver freezer: "11:freezer:/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/system.slice/crio-f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c.scope"
	I0708 23:44:50.927324  383102 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/system.slice/crio-f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c.scope/freezer.state
	I0708 23:44:50.932380  383102 api_server.go:202] freezer state: "THAWED"
	I0708 23:44:50.932400  383102 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0708 23:44:50.940647  383102 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
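api_server.go performs the health check in three steps visible above: pgrep resolves the apiserver PID, the process's freezer cgroup is read to confirm it is "THAWED" rather than frozen, and only then is /healthz probed over TLS. A self-contained sketch of the final probe (InsecureSkipVerify keeps the example standalone; minikube itself verifies against its generated CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz probes the apiserver /healthz endpoint the way the log shows:
    // a GET that counts as healthy when it returns 200 with a body of "ok".
    func checkHealthz(endpoint string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        fmt.Printf("%s/healthz returned 200:\n%s\n", endpoint, body)
        return nil
    }

    func main() {
        if err := checkHealthz("https://192.168.58.2:8443"); err != nil {
            fmt.Println("apiserver not healthy:", err)
        }
    }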
	I0708 23:44:50.968340  383102 system_pods.go:86] 7 kube-system pods found
	I0708 23:44:50.968365  383102 system_pods.go:89] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:50.968372  383102 system_pods.go:89] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:50.968381  383102 system_pods.go:89] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:50.968389  383102 system_pods.go:89] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:50.968394  383102 system_pods.go:89] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:50.968404  383102 system_pods.go:89] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:50.968409  383102 system_pods.go:89] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:50.969071  383102 api_server.go:139] control plane version: v1.21.2
	I0708 23:44:50.969091  383102 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.58.2
	I0708 23:44:50.969100  383102 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0708 23:44:50.969105  383102 kubeadm.go:604] restartCluster took 72.629672ms
	I0708 23:44:50.969114  383102 kubeadm.go:392] StartCluster complete in 143.022344ms
	I0708 23:44:50.969124  383102 settings.go:142] acquiring lock: {Name:mkd7e81a263e91a8570dc867d9c6f95db0e3f272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:44:50.969188  383102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:44:50.969783  383102 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig: {Name:mk7ece99e42242db0c85d6c11531cc9d1c12a34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:44:50.970369  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:50.973359  383102 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210708233938-257783" rescaled to 1
	I0708 23:44:50.973409  383102 start.go:220] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0708 23:44:50.977036  383102 out.go:165] * Verifying Kubernetes components...
	I0708 23:44:50.977080  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:50.973644  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 23:44:50.973655  383102 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0708 23:44:50.977189  383102 addons.go:59] Setting storage-provisioner=true in profile "pause-20210708233938-257783"
	I0708 23:44:50.977229  383102 addons.go:135] Setting addon storage-provisioner=true in "pause-20210708233938-257783"
	W0708 23:44:50.977246  383102 addons.go:147] addon storage-provisioner should already be in state true
	I0708 23:44:50.977293  383102 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:50.977346  383102 addons.go:59] Setting default-storageclass=true in profile "pause-20210708233938-257783"
	I0708 23:44:50.977366  383102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210708233938-257783"
	I0708 23:44:50.977642  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:50.977846  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:51.040750  383102 out.go:165]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 23:44:51.040845  383102 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:44:51.040854  383102 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 23:44:51.040902  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:51.059995  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:51.063879  383102 addons.go:135] Setting addon default-storageclass=true in "pause-20210708233938-257783"
	W0708 23:44:51.063911  383102 addons.go:147] addon default-storageclass should already be in state true
	I0708 23:44:51.063955  383102 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:51.064454  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:51.120151  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:51.133089  383102 start.go:710] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0708 23:44:51.133129  383102 node_ready.go:35] waiting up to 6m0s for node "pause-20210708233938-257783" to be "Ready" ...
	I0708 23:44:51.144796  383102 node_ready.go:49] node "pause-20210708233938-257783" has status "Ready":"True"
	I0708 23:44:51.144810  383102 node_ready.go:38] duration metric: took 11.663188ms waiting for node "pause-20210708233938-257783" to be "Ready" ...
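The node_ready.go wait reduces to reading the node's Ready condition from the API until it reports "True". A hedged client-go sketch of the same test (kubeconfig path and clientset setup are illustrative):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True,
    // the same check node_ready.go logs as `"Ready":"True"`.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(nodeReady(cs, "pause-20210708233938-257783"))
    }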
	I0708 23:44:51.144817  383102 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 23:44:51.151821  383102 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 23:44:51.151836  383102 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 23:44:51.151881  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:51.162008  383102 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.178412  383102 pod_ready.go:92] pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.178425  383102 pod_ready.go:81] duration metric: took 16.393726ms waiting for pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.178434  383102 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.182215  383102 pod_ready.go:92] pod "etcd-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.182231  383102 pod_ready.go:81] duration metric: took 3.790081ms waiting for pod "etcd-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.182242  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.185941  383102 pod_ready.go:92] pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.185957  383102 pod_ready.go:81] duration metric: took 3.703058ms waiting for pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.185966  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.193311  383102 pod_ready.go:92] pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.193326  383102 pod_ready.go:81] duration metric: took 7.350387ms waiting for pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.193335  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb2ws" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.199623  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:51.228409  383102 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:44:51.289804  383102 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 23:44:51.544987  383102 pod_ready.go:92] pod "kube-proxy-rb2ws" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.545034  383102 pod_ready.go:81] duration metric: took 351.691462ms waiting for pod "kube-proxy-rb2ws" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.545056  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.611304  383102 out.go:165] * Enabled addons: storage-provisioner, default-storageclass
	I0708 23:44:51.611327  383102 addons.go:344] enableAddons completed in 637.673923ms
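The addon flow above has two halves: each manifest is copied from memory to /etc/kubernetes/addons over SSH, then applied with the version-pinned kubectl inside the node. A sketch of the apply step, meant to run on the node itself, with the exact command and paths copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon mirrors the ssh_runner commands in the log: the pinned kubectl
    // binary, pointed at the in-VM kubeconfig, applying a manifest previously
    // copied to /etc/kubernetes/addons. Shown only as a sketch.
    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.21.2/kubectl",
            "apply", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
        }
        return nil
    }

    func main() {
        for _, m := range []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        } {
            if err := applyAddon(m); err != nil {
                fmt.Println(err)
            }
        }
    }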
	I0708 23:44:51.944191  383102 pod_ready.go:92] pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.944240  383102 pod_ready.go:81] duration metric: took 399.15943ms waiting for pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.944260  383102 pod_ready.go:38] duration metric: took 799.430802ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 23:44:51.944284  383102 api_server.go:50] waiting for apiserver process to appear ...
	I0708 23:44:51.944353  383102 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:44:51.962521  383102 api_server.go:70] duration metric: took 989.086682ms to wait for apiserver process to appear ...
	I0708 23:44:51.962540  383102 api_server.go:86] waiting for apiserver healthz status ...
	I0708 23:44:51.962549  383102 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0708 23:44:51.976017  383102 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0708 23:44:51.976872  383102 api_server.go:139] control plane version: v1.21.2
	I0708 23:44:51.976889  383102 api_server.go:129] duration metric: took 14.342835ms to wait for apiserver health ...
	I0708 23:44:51.976896  383102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 23:44:52.147101  383102 system_pods.go:59] 8 kube-system pods found
	I0708 23:44:52.147126  383102 system_pods.go:61] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:52.147132  383102 system_pods.go:61] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:52.147156  383102 system_pods.go:61] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:52.147170  383102 system_pods.go:61] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:52.147175  383102 system_pods.go:61] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:52.147180  383102 system_pods.go:61] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:52.147188  383102 system_pods.go:61] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:52.147196  383102 system_pods.go:61] "storage-provisioner" [939f2223-21e0-4e8d-8f43-fd8f9cc992b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 23:44:52.147205  383102 system_pods.go:74] duration metric: took 170.300522ms to wait for pod list to return data ...
	I0708 23:44:52.147214  383102 default_sa.go:34] waiting for default service account to be created ...
	I0708 23:44:52.344080  383102 default_sa.go:45] found service account: "default"
	I0708 23:44:52.344097  383102 default_sa.go:55] duration metric: took 196.867452ms for default service account to be created ...
	I0708 23:44:52.344104  383102 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 23:44:52.546575  383102 system_pods.go:86] 8 kube-system pods found
	I0708 23:44:52.546597  383102 system_pods.go:89] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:52.546603  383102 system_pods.go:89] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:52.546608  383102 system_pods.go:89] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:52.546614  383102 system_pods.go:89] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:52.546619  383102 system_pods.go:89] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:52.546624  383102 system_pods.go:89] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:52.546629  383102 system_pods.go:89] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:52.546638  383102 system_pods.go:89] "storage-provisioner" [939f2223-21e0-4e8d-8f43-fd8f9cc992b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 23:44:52.546644  383102 system_pods.go:126] duration metric: took 202.535502ms to wait for k8s-apps to be running ...
	I0708 23:44:52.546651  383102 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 23:44:52.546691  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:52.554858  383102 system_svc.go:56] duration metric: took 8.204667ms WaitForService to wait for kubelet.
	I0708 23:44:52.554876  383102 kubeadm.go:547] duration metric: took 1.581445531s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0708 23:44:52.554910  383102 node_conditions.go:102] verifying NodePressure condition ...
	I0708 23:44:52.744446  383102 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0708 23:44:52.744473  383102 node_conditions.go:123] node cpu capacity is 2
	I0708 23:44:52.744486  383102 node_conditions.go:105] duration metric: took 189.57062ms to run NodePressure ...
	I0708 23:44:52.744495  383102 start.go:225] waiting for startup goroutines ...
	I0708 23:44:52.795296  383102 start.go:462] kubectl: 1.21.2, cluster: 1.21.2 (minor skew: 0)
	I0708 23:44:52.798688  383102 out.go:165] * Done! kubectl is now configured to use "pause-20210708233938-257783" cluster and "default" namespace by default
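start.go:462 above reports the minor-version skew between the local kubectl and the cluster's control plane. A trivial sketch of that computation (deliberately naive semver parsing, for illustration only):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two "major.minor.patch" strings, the number logged as "(minor skew: N)".
    func minorSkew(client, cluster string) int {
        minor := func(v string) int {
            n, _ := strconv.Atoi(strings.Split(v, ".")[1])
            return n
        }
        d := minor(client) - minor(cluster)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        fmt.Printf("kubectl: 1.21.2, cluster: 1.21.2 (minor skew: %d)\n",
            minorSkew("1.21.2", "1.21.2"))
    }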
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Thu 2021-07-08 23:42:57 UTC, end at Thu 2021-07-08 23:44:56 UTC. --
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.752367414Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-mnwpk Namespace:kube-system ID:ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f NetNS:/var/run/netns/ec4d5ca9-9e24-41d3-8013-97d3a7a811bd Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.752537333Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.851068596Z" level=info msg="Ran pod sandbox ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f with infra container: kube-system/coredns-558bd4d5db-mnwpk/POD" id=c9132cb2-089f-4563-8891-94bd70e68b31 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.851819090Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.0" id=535e481f-c9b5-4da0-888e-28da677e78c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.852397777Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:919b800fed6eaf6c9a55c3017c0aa3187bfe5d81abefbe49bb27f968458b94cc k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:39402464,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=535e481f-c9b5-4da0-888e-28da677e78c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.855119319Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.0" id=c226c346-9d15-4dc7-8640-d22668769349 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.855626237Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:919b800fed6eaf6c9a55c3017c0aa3187bfe5d81abefbe49bb27f968458b94cc k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:39402464,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c226c346-9d15-4dc7-8640-d22668769349 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.856387143Z" level=info msg="Creating container: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=9ffe0ac0-8fdf-4cab-92cd-d96e15acb1f8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.870099418Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged/etc/passwd: no such file or directory"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.870133896Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged/etc/group: no such file or directory"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.944318612Z" level=info msg="Created container b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=9ffe0ac0-8fdf-4cab-92cd-d96e15acb1f8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.944919240Z" level=info msg="Starting container: b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7" id=99de0758-72ab-4e9c-b175-4fef1b41793e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.955103274Z" level=info msg="Started container b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=99de0758-72ab-4e9c-b175-4fef1b41793e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:51 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:51.912211778Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=3fe405bf-c337-430c-ba8b-4acaabc95cf2 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.048464048Z" level=info msg="Ran pod sandbox 049e6b5335b3d37bd7b1f71f526dfb38a2146de747c5333e44dd562b58da320c with infra container: kube-system/storage-provisioner/POD" id=3fe405bf-c337-430c-ba8b-4acaabc95cf2 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.049231829Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b3e22441-f73c-48a3-b70b-8df95e9c6a80 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.049808794Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b3e22441-f73c-48a3-b70b-8df95e9c6a80 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.050512018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6d37e3ff-30d9-415f-ab55-83d772199ce8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.051005283Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6d37e3ff-30d9-415f-ab55-83d772199ce8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.051652721Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cbb3b7bb-00a6-411a-970c-153e4e488ad5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.065018823Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2db347e4f0d5d5b51e801807bc8894287c0f6d7b8ece1a922cadd38989584d2d/merged/etc/passwd: no such file or directory"
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.065122749Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2db347e4f0d5d5b51e801807bc8894287c0f6d7b8ece1a922cadd38989584d2d/merged/etc/group: no such file or directory"
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.130702118Z" level=info msg="Created container ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf: kube-system/storage-provisioner/storage-provisioner" id=cbb3b7bb-00a6-411a-970c-153e4e488ad5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.131445227Z" level=info msg="Starting container: ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf" id=5736abf9-7c6e-4d2d-99f4-1b9d9b3933f2 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.141586857Z" level=info msg="Started container ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf: kube-system/storage-provisioner/storage-provisioner" id=5736abf9-7c6e-4d2d-99f4-1b9d9b3933f2 name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	ebc191d78d332       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 seconds ago        Running             storage-provisioner       0                   049e6b5335b3d
	b7d6404120fcb       1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8   11 seconds ago       Running             coredns                   0                   ded1c1360c407
	7ca432c9b0953       d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105   56 seconds ago       Running             kube-proxy                0                   edb6f1460db48
	aa26d8524150c       f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301   56 seconds ago       Running             kindnet-cni               0                   79814c347cb14
	0cb308b9b448f       9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630   About a minute ago   Running             kube-controller-manager   0                   ebf106620bd16
	66d5fee706a3d       ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4   About a minute ago   Running             kube-scheduler            0                   98331c8576b70
	76999b0177398       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28   About a minute ago   Running             etcd                      0                   7df3a2be1b33d
	f275fc53ae00f       2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0   About a minute ago   Running             kube-apiserver            0                   153d3d24ac6ae
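The table above matches the column layout of crictl ps -a against the CRI-O socket, which is presumably how the report gathered it. A one-call sketch (assumes crictl is installed and configured for CRI-O on the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // List all CRI containers, running or not, in the same layout as the
        // table above. Requires crictl pointed at the CRI-O socket.
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
        fmt.Print(string(out))
    }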
	
	* 
	* ==> coredns [b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210708233938-257783
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-20210708233938-257783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5
	                    minikube.k8s.io/name=pause-20210708233938-257783
	                    minikube.k8s.io/updated_at=2021_07_08T23_43_45_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 08 Jul 2021 23:43:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210708233938-257783
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 08 Jul 2021 23:44:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:44:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    pause-20210708233938-257783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                06c382d0-5723-4c28-97d9-2bf95fc86b49
	  Boot ID:                    7cbe50af-3171-4d81-8fca-78216a04984f
	  Kernel Version:             5.8.0-1038-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.2
	  Kube-Proxy Version:         v1.21.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-mnwpk                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     58s
	  kube-system                 etcd-pause-20210708233938-257783                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         67s
	  kube-system                 kindnet-589hd                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-pause-20210708233938-257783             250m (12%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-pause-20210708233938-257783    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-rb2ws                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-pause-20210708233938-257783             100m (5%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  89s (x8 over 90s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s (x7 over 90s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s (x7 over 90s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 68s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 57s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                17s                kubelet     Node pause-20210708233938-257783 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000671] FS-Cache: O-key=[8] '77e60b0000000000'
	[  +0.000514] FS-Cache: N-cookie c=00000000e6b84f6b [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000917] FS-Cache: N-cookie d=0000000052778918 n=000000009967b9dc
	[  +0.000663] FS-Cache: N-key=[8] '77e60b0000000000'
	[  +0.001810] FS-Cache: Duplicate cookie detected
	[  +0.000530] FS-Cache: O-cookie c=0000000057c7fc1d [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=0000000052778918 n=00000000efae32c9
	[  +0.000673] FS-Cache: O-key=[8] '77e60b0000000000'
	[  +0.000542] FS-Cache: N-cookie c=00000000f56d3f5d [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000863] FS-Cache: N-cookie d=0000000052778918 n=00000000e997ef03
	[  +0.000702] FS-Cache: N-key=[8] '77e60b0000000000'
	[  +1.187985] FS-Cache: Duplicate cookie detected
	[  +0.000541] FS-Cache: O-cookie c=000000000ea7a21c [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.000903] FS-Cache: O-cookie d=0000000052778918 n=00000000f7f72a4b
	[  +0.000697] FS-Cache: O-key=[8] '76e60b0000000000'
	[  +0.000532] FS-Cache: N-cookie c=00000000dc14d28d [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000872] FS-Cache: N-cookie d=0000000052778918 n=00000000fd1ba8e6
	[  +0.000719] FS-Cache: N-key=[8] '76e60b0000000000'
	[  +0.299966] FS-Cache: Duplicate cookie detected
	[  +0.000563] FS-Cache: O-cookie c=00000000b39eb93d [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.000913] FS-Cache: O-cookie d=0000000052778918 n=00000000654c5f24
	[  +0.000696] FS-Cache: O-key=[8] '79e60b0000000000'
	[  +0.000542] FS-Cache: N-cookie c=000000004dd4c5bf [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=0000000052778918 n=000000008dfb704a
	[  +0.000684] FS-Cache: N-key=[8] '79e60b0000000000'
	
	* 
	* ==> etcd [76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41] <==
	* 2021-07-08 23:43:29.374785 I | etcdserver: setting up the initial cluster version to 3.4
	2021-07-08 23:43:29.407011 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-07-08 23:43:29.407103 I | etcdserver/api: enabled capabilities for version 3.4
	2021-07-08 23:43:29.407157 I | etcdserver: published {Name:pause-20210708233938-257783 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-07-08 23:43:29.407465 I | embed: ready to serve client requests
	2021-07-08 23:43:29.415509 I | embed: serving client requests on 127.0.0.1:2379
	2021-07-08 23:43:29.423096 I | embed: ready to serve client requests
	2021-07-08 23:43:29.424420 I | embed: serving client requests on 192.168.58.2:2379
	2021-07-08 23:43:38.896326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:43:39.824755 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:heapster\" " with result "range_response_count:0 size:4" took too long (132.781472ms) to execute
	2021-07-08 23:43:40.062517 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (126.484476ms) to execute
	2021-07-08 23:43:40.062723 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:node-bootstrapper\" " with result "range_response_count:0 size:4" took too long (157.087895ms) to execute
	2021-07-08 23:43:41.406099 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:kube-scheduler\" " with result "range_response_count:0 size:5" took too long (106.875165ms) to execute
	2021-07-08 23:43:41.406344 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210708233938-257783\" " with result "range_response_count:1 size:5706" took too long (100.988866ms) to execute
	2021-07-08 23:43:41.800497 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler\" " with result "range_response_count:0 size:5" took too long (104.221848ms) to execute
	2021-07-08 23:43:42.415075 W | etcdserver: read-only range request "key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" " with result "range_response_count:0 size:5" took too long (113.749552ms) to execute
	2021-07-08 23:43:42.790083 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system:controller:cloud-provider\" " with result "range_response_count:0 size:5" took too long (139.951306ms) to execute
	2021-07-08 23:43:42.790797 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (104.526083ms) to execute
	2021-07-08 23:43:55.482711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:43:58.854149 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:08.855081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:18.853976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:28.854207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:38.854850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:48.854356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  23:44:56 up  2:27,  0 users,  load average: 4.09, 2.92, 1.91
	Linux pause-20210708233938-257783 5.8.0-1038-aws #40~20.04.1-Ubuntu SMP Thu Jun 17 13:20:15 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c] <==
	* I0708 23:43:38.659980       1 cache.go:39] Caches are synced for autoregister controller
	I0708 23:43:38.660019       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0708 23:43:38.765817       1 controller.go:611] quota admission added evaluator for: namespaces
	I0708 23:43:39.399647       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0708 23:43:39.399669       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0708 23:43:39.413314       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0708 23:43:39.428902       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0708 23:43:39.428920       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0708 23:43:42.417829       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 23:43:42.615824       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0708 23:43:42.951365       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0708 23:43:42.952308       1 controller.go:611] quota admission added evaluator for: endpoints
	I0708 23:43:42.961082       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 23:43:44.101667       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0708 23:43:44.674905       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0708 23:43:44.719032       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0708 23:43:48.275800       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 23:43:58.042264       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0708 23:43:58.285534       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0708 23:44:04.479561       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:44:04.479599       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:44:04.479606       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:44:35.434880       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:44:35.434919       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:44:35.434927       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef] <==
	* I0708 23:43:57.880761       1 shared_informer.go:247] Caches are synced for HPA 
	I0708 23:43:57.880837       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0708 23:43:57.907676       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0708 23:43:57.908756       1 shared_informer.go:247] Caches are synced for endpoint 
	I0708 23:43:57.952724       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210708233938-257783" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0708 23:43:57.956872       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210708233938-257783" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0708 23:43:58.003743       1 shared_informer.go:247] Caches are synced for deployment 
	I0708 23:43:58.061826       1 shared_informer.go:247] Caches are synced for disruption 
	I0708 23:43:58.061841       1 disruption.go:371] Sending events to api server.
	I0708 23:43:58.113116       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0708 23:43:58.121312       1 shared_informer.go:247] Caches are synced for resource quota 
	I0708 23:43:58.138517       1 shared_informer.go:247] Caches are synced for resource quota 
	I0708 23:43:58.160698       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-589hd"
	I0708 23:43:58.238962       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rb2ws"
	I0708 23:43:58.288443       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	E0708 23:43:58.326235       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"85756639-7788-414f-aae2-a95c8ac59acd", ResourceVersion:"309", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761384625, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000d528a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000d528b8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001394920), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d528d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d528e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d52900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001394940)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001394980)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40014b4240), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000f18168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a56700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400135e8f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000f181b0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0708 23:43:58.358158       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-xtvks"
	I0708 23:43:58.384252       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-mnwpk"
	I0708 23:43:58.532367       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0708 23:43:58.551775       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0708 23:43:58.551796       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0708 23:43:58.636207       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0708 23:43:58.654856       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-xtvks"
	I0708 23:44:42.867632       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
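The long kube-controller-manager error above is an optimistic-concurrency conflict: another client updated the kindnet DaemonSet between the controller's read and its status write, so the apiserver answered "the object has been modified" and the controller retried. A minimal client-go sketch of that read-modify-write retry pattern, assuming an already-constructed kubernetes.Interface (the package and function names here are illustrative, not minikube's code):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// updateKindnetStatus re-reads the DaemonSet on every attempt so the write
	// carries the latest resourceVersion; a 409 Conflict re-runs the closure.
	func updateKindnetStatus(ctx context.Context, cs kubernetes.Interface) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			ds, err := cs.AppsV1().DaemonSets("kube-system").Get(ctx, "kindnet", metav1.GetOptions{})
			if err != nil {
				return err
			}
			// ...mutate ds.Status here before writing it back...
			_, err = cs.AppsV1().DaemonSets("kube-system").UpdateStatus(ctx, ds, metav1.UpdateOptions{})
			return err
		})
	}

Because the controller retries in exactly this way, the conflict is start-up noise rather than a test failure.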
	
	* 
	* ==> kube-proxy [7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a] <==
	* I0708 23:43:59.522352       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0708 23:43:59.522418       1 server_others.go:140] Detected node IP 192.168.58.2
	W0708 23:43:59.522436       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0708 23:43:59.592863       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0708 23:43:59.592891       1 server_others.go:212] Using iptables Proxier.
	I0708 23:43:59.592900       1 server_others.go:219] creating dualStackProxier for iptables.
	W0708 23:43:59.592910       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0708 23:43:59.593168       1 server.go:643] Version: v1.21.2
	I0708 23:43:59.593489       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0708 23:43:59.593530       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0708 23:43:59.594089       1 config.go:315] Starting service config controller
	I0708 23:43:59.594140       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0708 23:43:59.594778       1 config.go:224] Starting endpoint slice config controller
	I0708 23:43:59.594818       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0708 23:43:59.596985       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0708 23:43:59.598797       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0708 23:43:59.695058       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0708 23:43:59.695065       1 shared_informer.go:247] Caches are synced for service config 
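The two v1beta1 warnings above appear because this kube-proxy still watches discovery.k8s.io/v1beta1 EndpointSlices, which v1.21 deprecates in favor of discovery.k8s.io/v1. A minimal client-go sketch of reading the replacement API (illustrative names; assumes a configured kubernetes.Interface):

	package sketch

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listSlices reads EndpointSlices through discovery.k8s.io/v1, the group
	// version that replaces the deprecated v1beta1 API warned about above.
	func listSlices(ctx context.Context, cs kubernetes.Interface) error {
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, "endpoints:", len(s.Endpoints))
		}
		return nil
	}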
	
	* 
	* ==> kube-scheduler [66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e] <==
	* E0708 23:43:38.663259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 23:43:38.663881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:43:38.663980       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:43:38.664026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:43:38.664104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:43:38.664153       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:43:38.667689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:43:39.506225       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:43:39.684692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:43:39.707077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:39.715815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 23:43:39.739475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:43:39.927791       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:43:39.950708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:43:40.026534       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 23:43:40.052611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.106259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.125654       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.138747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:43:40.200954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 23:43:40.246523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0708 23:43:42.914398       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
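The run of "forbidden" errors above is routine scheduler start-up noise: its informers begin listing before the apiserver has finished establishing the system:kube-scheduler RBAC bindings, and the closing "Caches are synced" line shows the condition cleared on its own. One way to probe such a permission directly is a SelfSubjectAccessReview; a sketch under the same illustrative-names caveat:

	package sketch

	import (
		"context"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// canListPods asks the apiserver whether the current identity may list pods
	// cluster-wide, the same check the reflector errors above were failing.
	func canListPods(ctx context.Context, cs kubernetes.Interface) (bool, error) {
		review := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "pods"},
			},
		}
		resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, review, metav1.CreateOptions{})
		if err != nil {
			return false, err
		}
		return resp.Status.Allowed, nil
	}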
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-07-08 23:42:57 UTC, end at Thu 2021-07-08 23:44:56 UTC. --
	Jul 08 23:43:58 pause-20210708233938-257783 kubelet[2084]: E0708 23:43:58.968952    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:03 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:03.910585    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:08 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:08.911690    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:09 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:09.035899    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:13 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:13.913137    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:18 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:18.914320    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:19 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:19.141178    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:23 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:23.915497    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:28 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:28.916031    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:29 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:29.195302    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:39 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:39.262592    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.378521    2084 topology_manager.go:187] "Topology Admit Handler"
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.439076    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd8ce294-9dba-4d2e-8793-cc0862414323-config-volume\") pod \"coredns-558bd4d5db-mnwpk\" (UID: \"cd8ce294-9dba-4d2e-8793-cc0862414323\") "
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.439122    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjk4b\" (UniqueName: \"kubernetes.io/projected/cd8ce294-9dba-4d2e-8793-cc0862414323-kube-api-access-wjk4b\") pod \"coredns-558bd4d5db-mnwpk\" (UID: \"cd8ce294-9dba-4d2e-8793-cc0862414323\") "
	Jul 08 23:44:49 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:49.318435    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:49 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:49.796926    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:50 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:50.049924    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:50 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:50.459815    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:51.106860    2084 container.go:586] Failed to update stats for container "/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7": /sys/fs/cgroup/cpuset/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/cpuset.cpus found to be empty, continuing to push stats
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:51.127776    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.610872    2084 topology_manager.go:187] "Topology Admit Handler"
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.679061    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ndmf\" (UniqueName: \"kubernetes.io/projected/939f2223-21e0-4e8d-8f43-fd8f9cc992b8-kube-api-access-6ndmf\") pod \"storage-provisioner\" (UID: \"939f2223-21e0-4e8d-8f43-fd8f9cc992b8\") "
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.679129    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/939f2223-21e0-4e8d-8f43-fd8f9cc992b8-tmp\") pod \"storage-provisioner\" (UID: \"939f2223-21e0-4e8d-8f43-fd8f9cc992b8\") "
	Jul 08 23:44:52 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:52.247269    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:53 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:53.995237    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
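The repeated "Container runtime network not ready" errors above mean only that no CNI config file had been written to /etc/cni/net.d yet; once kindnet dropped one in, the node went Ready (the controller-manager's "Nodes are Ready" line at 23:44:42 marks the recovery). A minimal sketch of waiting for that transition by polling the NodeReady condition (illustrative names; assumes a configured kubernetes.Interface):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls until the node reports Ready, i.e. until the CNI
	// configuration the kubelet is complaining about above has appeared.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second):
			}
		}
	}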
	
	* 
	* ==> storage-provisioner [ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf] <==
	* I0708 23:44:52.156408       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 23:44:52.170055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 23:44:52.170092       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 23:44:52.181346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 23:44:52.181466       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409!
	I0708 23:44:52.181651       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"812885a7-6ecb-4200-9882-e4b3a6fd0939", APIVersion:"v1", ResourceVersion:"519", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409 became leader
	I0708 23:44:52.282548       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409!
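The lease lines above are the standard client-go leader-election handshake. This provisioner records its lock on an Endpoints object (visible in the event), while current client-go examples use a coordination.k8s.io Lease; a compressed sketch of the same pattern with a LeaseLock, where the namespace and lock name are copied from the log and everything else is illustrative:

	package sketch

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	// runWithLease blocks while competing for the lease and invokes work once
	// this identity becomes leader, mirroring the acquire/start lines above.
	func runWithLease(ctx context.Context, cs kubernetes.Interface, id string, work func(context.Context)) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: work,
				OnStoppedLeading: func() { /* demoted: stop provisioning */ },
			},
		})
	}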
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210708233938-257783 -n pause-20210708233938-257783
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210708233938-257783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestPause/serial/Pause]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context pause-20210708233938-257783 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context pause-20210708233938-257783 describe pod : exit status 1 (56.982926ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:275: kubectl --context pause-20210708233938-257783 describe pod : exit status 1
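The exit status 1 above is expected rather than a kubectl regression: the field-selector query found no non-running pods, so the harness ran "kubectl describe pod" with zero names, and kubectl rejects an empty resource name. A hypothetical guard, with the function and variable names invented for illustration:

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// describeNonRunning shells out to kubectl only when there is something to
	// describe, avoiding the "resource name may not be empty" error above.
	func describeNonRunning(kubeContext, nonRunning string) ([]byte, error) {
		names := strings.Fields(nonRunning)
		if len(names) == 0 {
			return nil, nil // no non-running pods, nothing to describe
		}
		args := append([]string{"--context", kubeContext, "describe", "pod"}, names...)
		return exec.Command("kubectl", args...).CombinedOutput()
	}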
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210708233938-257783
helpers_test.go:236: (dbg) docker inspect pause-20210708233938-257783:

-- stdout --
	[
	    {
	        "Id": "9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7",
	        "Created": "2021-07-08T23:42:55.939971333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 374514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-07-08T23:42:56.671510562Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/hosts",
	        "LogPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7-json.log",
	        "Name": "/pause-20210708233938-257783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210708233938-257783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210708233938-257783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa3171af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/docker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-20210708233938-257783",
	                "Source": "/var/lib/docker/volumes/pause-20210708233938-257783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210708233938-257783",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210708233938-257783",
	                "name.minikube.sigs.k8s.io": "pause-20210708233938-257783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3364fc967f3a3a4f088daf2fc73d5bc45f12bb4867ba695dabf0ca91254c0104",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49617"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49616"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49613"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49615"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49614"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3364fc967f3a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210708233938-257783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9e0e986f196e",
	                        "pause-20210708233938-257783"
	                    ],
	                    "NetworkID": "7afb1bbd4669bf981affda6e21a0542828c16cc07887274e53996cdbb87c5e05",
	                    "EndpointID": "cf78b06b889a67153f813b6dd94cd8e9e0adb49ff2586b7f7058289d1b323f20",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
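The harness reads this inspect dump for container state and port mappings (for example, 8443/tcp published on 127.0.0.1:49614). The same fields can be fetched without shelling out through the Docker Go SDK; a minimal sketch, assuming the standard github.com/docker/docker/client package and the container name taken from the log:

	package sketch

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	// printPorts reproduces the State.Status and Ports sections of the dump.
	func printPorts(ctx context.Context) error {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			return err
		}
		insp, err := cli.ContainerInspect(ctx, "pause-20210708233938-257783")
		if err != nil {
			return err
		}
		fmt.Println("status:", insp.State.Status) // "running" in the dump above
		for port, bindings := range insp.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
		return nil
	}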
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210708233938-257783 -n pause-20210708233938-257783
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p pause-20210708233938-257783 logs -n 25
helpers_test.go:253: TestPause/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                    Args                    |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                         | multinode-20210708232645-257783-m03        | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:36:14 UTC | Thu, 08 Jul 2021 23:36:18 UTC |
	|         | multinode-20210708232645-257783-m03        |                                            |         |         |                               |                               |
	| delete  | -p                                         | multinode-20210708232645-257783            | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:36:18 UTC | Thu, 08 Jul 2021 23:36:23 UTC |
	|         | multinode-20210708232645-257783            |                                            |         |         |                               |                               |
	| start   | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:38:07 UTC | Thu, 08 Jul 2021 23:38:52 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --memory=2048 --driver=docker              |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:38:52 UTC | Thu, 08 Jul 2021 23:38:52 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --cancel-scheduled                         |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:05 UTC | Thu, 08 Jul 2021 23:39:12 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --schedule 5s                              |                                            |         |         |                               |                               |
	| delete  | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:12 UTC | Thu, 08 Jul 2021 23:39:17 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	| delete  | -p                                         | insufficient-storage-20210708233917-257783 | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:32 UTC | Thu, 08 Jul 2021 23:39:38 UTC |
	|         | insufficient-storage-20210708233917-257783 |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubenet-20210708233938-257783              | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:39:38 UTC |
	|         | kubenet-20210708233938-257783              |                                            |         |         |                               |                               |
	| delete  | -p                                         | flannel-20210708233938-257783              | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:39:39 UTC |
	|         | flannel-20210708233938-257783              |                                            |         |         |                               |                               |
	| delete  | -p false-20210708233939-257783             | false-20210708233939-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:39 UTC | Thu, 08 Jul 2021 23:39:40 UTC |
	| start   | -p                                         | force-systemd-env-20210708233940-257783    | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:40 UTC | Thu, 08 Jul 2021 23:40:42 UTC |
	|         | force-systemd-env-20210708233940-257783    |                                            |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=5 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-env-20210708233940-257783    | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:40:42 UTC | Thu, 08 Jul 2021 23:40:44 UTC |
	|         | force-systemd-env-20210708233940-257783    |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-flag-20210708234044-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:40:44 UTC | Thu, 08 Jul 2021 23:41:30 UTC |
	|         | force-systemd-flag-20210708234044-257783   |                                            |         |         |                               |                               |
	|         | --memory=2048 --force-systemd              |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-flag-20210708234044-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:41:30 UTC | Thu, 08 Jul 2021 23:41:33 UTC |
	|         | force-systemd-flag-20210708234044-257783   |                                            |         |         |                               |                               |
	| start   | -p                                         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:41:33 UTC | Thu, 08 Jul 2021 23:42:18 UTC |
	|         | cert-options-20210708234133-257783         |                                            |         |         |                               |                               |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                  |                                            |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15              |                                            |         |         |                               |                               |
	|         | --apiserver-names=localhost                |                                            |         |         |                               |                               |
	|         | --apiserver-names=www.google.com           |                                            |         |         |                               |                               |
	|         | --apiserver-port=8555                      |                                            |         |         |                               |                               |
	|         | --driver=docker                            |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| -p      | cert-options-20210708234133-257783         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:19 UTC | Thu, 08 Jul 2021 23:42:19 UTC |
	|         | ssh openssl x509 -text -noout -in          |                                            |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt      |                                            |         |         |                               |                               |
	| delete  | -p                                         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:19 UTC | Thu, 08 Jul 2021 23:42:22 UTC |
	|         | cert-options-20210708234133-257783         |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:22 UTC | Thu, 08 Jul 2021 23:43:17 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0               |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| stop    | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:43:17 UTC | Thu, 08 Jul 2021 23:43:20 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:43:20 UTC | Thu, 08 Jul 2021 23:44:07 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-beta.0        |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:07 UTC | Thu, 08 Jul 2021 23:44:29 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-beta.0        |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:29 UTC | Thu, 08 Jul 2021 23:44:32 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	| start   | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:44:47 UTC |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --install-addons=false                     |                                            |         |         |                               |                               |
	|         | --wait=all --driver=docker                 |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:47 UTC | Thu, 08 Jul 2021 23:44:52 UTC |
	|         | --alsologtostderr                          |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| -p      | pause-20210708233938-257783                | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:55 UTC | Thu, 08 Jul 2021 23:44:56 UTC |
	|         | logs -n 25                                 |                                            |         |         |                               |                               |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
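	For reference, the final start in the table corresponds to this single invocation (reconstructed from the table rows above; binary path and profile name as listed there):
	  out/minikube-linux-arm64 start -p pause-20210708233938-257783 --alsologtostderr -v=1 --driver=docker --container-runtime=crio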
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/07/08 23:44:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 23:44:47.154451  383102 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:44:47.154571  383102 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:47.154583  383102 out.go:299] Setting ErrFile to fd 2...
	I0708 23:44:47.154587  383102 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:47.154704  383102 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:44:47.154960  383102 out.go:293] Setting JSON to false
	I0708 23:44:47.156021  383102 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8836,"bootTime":1625779051,"procs":490,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:44:47.156093  383102 start.go:121] virtualization:  
	I0708 23:44:47.158605  383102 out.go:165] * [pause-20210708233938-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0708 23:44:47.160748  383102 out.go:165]   - MINIKUBE_LOCATION=11942
	I0708 23:44:47.162569  383102 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:44:47.164384  383102 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0708 23:44:47.166094  383102 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0708 23:44:47.166892  383102 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:44:47.221034  383102 docker.go:132] docker version: linux-20.10.7
	I0708 23:44:47.221102  383102 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:44:47.306208  383102 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:49 SystemTime:2021-07-08 23:44:47.254744355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:44:47.306303  383102 docker.go:244] overlay module found
	I0708 23:44:47.309309  383102 out.go:165] * Using the docker driver based on existing profile
	I0708 23:44:47.309327  383102 start.go:278] selected driver: docker
	I0708 23:44:47.309332  383102 start.go:751] validating driver "docker" against &{Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:47.309419  383102 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0708 23:44:47.309784  383102 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:44:47.393590  383102 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:49 SystemTime:2021-07-08 23:44:47.342522281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:44:47.393925  383102 cni.go:93] Creating CNI manager for ""
	I0708 23:44:47.393941  383102 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:44:47.393950  383102 start_flags.go:275] config:
	{Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:47.396046  383102 out.go:165] * Starting control plane node pause-20210708233938-257783 in cluster pause-20210708233938-257783
	I0708 23:44:47.396084  383102 cache.go:117] Beginning downloading kic base image for docker with crio
	I0708 23:44:47.398019  383102 out.go:165] * Pulling base image ...
	I0708 23:44:47.398037  383102 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:44:47.398068  383102 preload.go:150] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4
	I0708 23:44:47.398080  383102 cache.go:56] Caching tarball of preloaded images
	I0708 23:44:47.398205  383102 preload.go:174] Found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0708 23:44:47.398227  383102 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.2 on crio
	I0708 23:44:47.398319  383102 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/config.json ...
	I0708 23:44:47.398483  383102 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0708 23:44:47.436290  383102 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0708 23:44:47.436316  383102 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0708 23:44:47.436330  383102 cache.go:205] Successfully downloaded all kic artifacts
	I0708 23:44:47.436359  383102 start.go:313] acquiring machines lock for pause-20210708233938-257783: {Name:mk0dd574f5aab82d7e948dc25f56eae9437435ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 23:44:47.436434  383102 start.go:317] acquired machines lock for "pause-20210708233938-257783" in 54.777µs
	I0708 23:44:47.436455  383102 start.go:93] Skipping create...Using existing machine configuration
	I0708 23:44:47.436464  383102 fix.go:55] fixHost starting: 
	I0708 23:44:47.436724  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:47.471771  383102 fix.go:108] recreateIfNeeded on pause-20210708233938-257783: state=Running err=<nil>
	W0708 23:44:47.471801  383102 fix.go:134] unexpected machine state, will restart: <nil>
	I0708 23:44:47.474143  383102 out.go:165] * Updating the running docker "pause-20210708233938-257783" container ...
	I0708 23:44:47.474165  383102 machine.go:88] provisioning docker machine ...
	I0708 23:44:47.474179  383102 ubuntu.go:169] provisioning hostname "pause-20210708233938-257783"
	I0708 23:44:47.474233  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:47.518727  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:47.518901  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:47.518913  383102 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210708233938-257783 && echo "pause-20210708233938-257783" | sudo tee /etc/hostname
	I0708 23:44:47.662054  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210708233938-257783
	
	I0708 23:44:47.662122  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:47.698564  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:47.698719  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:47.698745  383102 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210708233938-257783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210708233938-257783/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210708233938-257783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 23:44:47.806503  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: 
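	A quick check that the snippet above wrote the expected mapping (a sketch; the expected value comes from this log):
	  grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 pause-20210708233938-257783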
	I0708 23:44:47.806520  383102 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube}
	I0708 23:44:47.806546  383102 ubuntu.go:177] setting up certificates
	I0708 23:44:47.806556  383102 provision.go:83] configureAuth start
	I0708 23:44:47.806605  383102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210708233938-257783
	I0708 23:44:47.841582  383102 provision.go:137] copyHostCerts
	I0708 23:44:47.841630  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem, removing ...
	I0708 23:44:47.841642  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem
	I0708 23:44:47.841700  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem (1078 bytes)
	I0708 23:44:47.841780  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem, removing ...
	I0708 23:44:47.841793  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem
	I0708 23:44:47.841816  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem (1123 bytes)
	I0708 23:44:47.841862  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem, removing ...
	I0708 23:44:47.841871  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem
	I0708 23:44:47.841892  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem (1679 bytes)
	I0708 23:44:47.841933  383102 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem org=jenkins.pause-20210708233938-257783 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210708233938-257783]
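	The SANs listed above can be confirmed on the generated certificate with standard openssl (a sketch; path shortened from the server.pem in the line above):
	  openssl x509 -noout -text -in .minikube/machines/server.pem | grep -A1 'Subject Alternative Name'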
	I0708 23:44:48.952877  383102 provision.go:171] copyRemoteCerts
	I0708 23:44:48.952938  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 23:44:48.952979  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:48.988956  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.069409  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 23:44:49.084030  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0708 23:44:49.098201  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 23:44:49.112707  383102 provision.go:86] duration metric: configureAuth took 1.306144285s
	I0708 23:44:49.112722  383102 ubuntu.go:193] setting minikube options for container-runtime
	I0708 23:44:49.112945  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.147842  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:49.148030  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:49.148050  383102 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	I0708 23:44:49.265435  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 23:44:49.265449  383102 machine.go:91] provisioned docker machine in 1.791277399s
	I0708 23:44:49.265466  383102 start.go:267] post-start starting for "pause-20210708233938-257783" (driver="docker")
	I0708 23:44:49.265473  383102 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 23:44:49.265521  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 23:44:49.265564  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.302440  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.385342  383102 ssh_runner.go:149] Run: cat /etc/os-release
	I0708 23:44:49.387501  383102 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0708 23:44:49.387521  383102 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0708 23:44:49.387533  383102 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0708 23:44:49.387542  383102 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0708 23:44:49.387552  383102 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/addons for local assets ...
	I0708 23:44:49.387592  383102 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/files for local assets ...
	I0708 23:44:49.387720  383102 start.go:270] post-start completed in 122.24664ms
	I0708 23:44:49.387753  383102 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 23:44:49.387787  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.422565  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.503288  383102 fix.go:57] fixHost completed within 2.066821667s
	I0708 23:44:49.503310  383102 start.go:80] releasing machines lock for "pause-20210708233938-257783", held for 2.066864546s
	I0708 23:44:49.503369  383102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210708233938-257783
	I0708 23:44:49.537513  383102 ssh_runner.go:149] Run: systemctl --version
	I0708 23:44:49.537553  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.537599  383102 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0708 23:44:49.537656  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.578213  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.591758  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.667104  383102 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0708 23:44:49.802373  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0708 23:44:49.809858  383102 docker.go:153] disabling docker service ...
	I0708 23:44:49.809898  383102 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0708 23:44:49.818109  383102 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0708 23:44:49.826668  383102 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0708 23:44:49.957409  383102 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0708 23:44:50.082177  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0708 23:44:50.090087  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 23:44:50.100877  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0708 23:44:50.109868  383102 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0708 23:44:50.109919  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0708 23:44:50.116503  383102 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 23:44:50.121833  383102 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 23:44:50.126949  383102 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0708 23:44:50.251265  383102 ssh_runner.go:149] Run: sudo systemctl start crio
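	A minimal sketch for verifying that the two crio.conf edits above survived the restart:
	  sudo grep -E '^(pause_image|cni_default_network)' /etc/crio/crio.conf
	  sudo systemctl is-active crio   # expect: active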
	I0708 23:44:50.259385  383102 start.go:386] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 23:44:50.259425  383102 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0708 23:44:50.261926  383102 start.go:411] Will wait 60s for crictl version
	I0708 23:44:50.261961  383102 ssh_runner.go:149] Run: sudo crictl version
	I0708 23:44:50.286962  383102 start.go:420] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0708 23:44:50.287041  383102 ssh_runner.go:149] Run: crio --version
	I0708 23:44:50.352750  383102 ssh_runner.go:149] Run: crio --version
	I0708 23:44:50.423233  383102 out.go:165] * Preparing Kubernetes v1.21.2 on CRI-O 1.20.3 ...
	I0708 23:44:50.423307  383102 cli_runner.go:115] Run: docker network inspect pause-20210708233938-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
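	The same subnet/gateway lookup can be expressed with a much shorter Go template (a sketch, equivalent in spirit to the format string above):
	  docker network inspect pause-20210708233938-257783 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'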
	I0708 23:44:50.464228  383102 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0708 23:44:50.467264  383102 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:44:50.467314  383102 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:44:50.490940  383102 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:44:50.490957  383102 crio.go:333] Images already preloaded, skipping extraction
	I0708 23:44:50.490993  383102 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:44:50.512176  383102 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:44:50.512192  383102 cache_images.go:74] Images are preloaded, skipping loading
	I0708 23:44:50.512245  383102 ssh_runner.go:149] Run: crio config
	I0708 23:44:50.587658  383102 cni.go:93] Creating CNI manager for ""
	I0708 23:44:50.587677  383102 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:44:50.587685  383102 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0708 23:44:50.587790  383102 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210708233938-257783 NodeName:pause-20210708233938-257783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0708 23:44:50.587905  383102 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210708233938-257783"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
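	This rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new further down; applied by hand, the equivalent would be roughly (a sketch, not necessarily the exact invocation minikube uses):
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new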
	
	I0708 23:44:50.587994  383102 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-20210708233938-257783 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
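	Once this drop-in lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below), the merged unit can be reviewed with standard systemd tooling (a sketch):
	  systemctl cat kubelet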
	I0708 23:44:50.588044  383102 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0708 23:44:50.593749  383102 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 23:44:50.593819  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 23:44:50.599162  383102 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (558 bytes)
	I0708 23:44:50.609681  383102 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 23:44:50.620170  383102 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1884 bytes)
	I0708 23:44:50.630479  383102 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0708 23:44:50.632974  383102 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783 for IP: 192.168.58.2
	I0708 23:44:50.633021  383102 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key
	I0708 23:44:50.633039  383102 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key
	I0708 23:44:50.633098  383102 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.key
	I0708 23:44:50.633117  383102 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.key.cee25041
	I0708 23:44:50.633142  383102 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.key
	I0708 23:44:50.633227  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem (1338 bytes)
	W0708 23:44:50.633268  383102 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783_empty.pem, impossibly tiny 0 bytes
	I0708 23:44:50.633280  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem (1675 bytes)
	I0708 23:44:50.633305  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem (1078 bytes)
	I0708 23:44:50.633332  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem (1123 bytes)
	I0708 23:44:50.633356  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem (1679 bytes)
	I0708 23:44:50.634343  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0708 23:44:50.648438  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 23:44:50.662480  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 23:44:50.677256  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 23:44:50.691568  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 23:44:50.705113  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0708 23:44:50.718728  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 23:44:50.733001  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 23:44:50.748832  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 23:44:50.762662  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem --> /usr/share/ca-certificates/257783.pem (1338 bytes)
	I0708 23:44:50.776552  383102 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 23:44:50.786598  383102 ssh_runner.go:149] Run: openssl version
	I0708 23:44:50.790834  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 23:44:50.796632  383102 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.799083  383102 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jul  8 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.799118  383102 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.803062  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 23:44:50.808543  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257783.pem && ln -fs /usr/share/ca-certificates/257783.pem /etc/ssl/certs/257783.pem"
	I0708 23:44:50.814370  383102 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.816803  383102 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jul  8 23:18 /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.816856  383102 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.820832  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257783.pem /etc/ssl/certs/51391683.0"
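	The two ln steps above use OpenSSL's subject-hash lookup convention: the link name is the certificate's subject hash plus ".0". A generic sketch of the same technique:
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 in this run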
	I0708 23:44:50.826095  383102 kubeadm.go:390] StartCluster: {Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:50.826162  383102 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 23:44:50.826221  383102 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 23:44:50.849897  383102 cri.go:76] found id: "b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7"
	I0708 23:44:50.849919  383102 cri.go:76] found id: "7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a"
	I0708 23:44:50.849943  383102 cri.go:76] found id: "aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e"
	I0708 23:44:50.849950  383102 cri.go:76] found id: "0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef"
	I0708 23:44:50.849954  383102 cri.go:76] found id: "66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e"
	I0708 23:44:50.849963  383102 cri.go:76] found id: "76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41"
	I0708 23:44:50.849967  383102 cri.go:76] found id: "f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c"
	I0708 23:44:50.849975  383102 cri.go:76] found id: ""
	I0708 23:44:50.850009  383102 ssh_runner.go:149] Run: sudo runc list -f json
	I0708 23:44:50.888444  383102 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef","pid":1704,"status":"running","bundle":"/run/containers/storage/overlay-containers/0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef/userdata","rootfs":"/var/lib/containers/storage/overlay/cdce3ed6af07ab111ab2fb108c2309db54d9634ce1811e68896b699446ff3e45/merged","created":"2021-07-08T23:43:28.796258779Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1c9d3bb9","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1c9d3bb9\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.containe
r.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.409347635Z","io.kubernetes.cri-o.Image":"9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.2","io.kubernetes.cri-o.ImageRef":"9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c0a79d1d801cddeaa32444663181957f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210708233938-257783_c0a79d1d801cddeaa32444663181957f/kube-controller-mana
ger/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cdce3ed6af07ab111ab2fb108c2309db54d9634ce1811e68896b699446ff3e45/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_pa
th\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c0a79d1d801cddeaa32444663181957f/containers/kube-controller-manager/b3e49874\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c0a79d1d801cddeaa32444663181957f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exe
c\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.hash":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.seen":"2021-07-08T23:43:23.463755710Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","pid":1454,"status":"running","bundle":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata","rootfs":"/var/lib/containers/storage/overlay/995db9ddd5bff03a3e4252f22825a88a5095babda303cc304d0d9f42db6e7025/merged","created":"2021-07-08T23:43:27
.761173536Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.58.2:8443\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463729331Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"48a917795140826e0af6da63b039926b\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.500118923Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9
e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"48a917795140826e0af6da63b039926b\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210708233938-257783\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210708233938-257783_48a917795140826e0af6da63b039926b/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210708233938-257783\",\"uid\":\"48a917795140826e0af6da63b039926b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/995db9ddd5bff03a3e4252f22825a88a5095babda303cc304d0d9f42db6e7025/merged","io.kubernete
s.cri-o.Name":"k8s_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"48a917795140826e0af6da63b039926b","kubeadm.kubernetes.io/kub
e-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48a917795140826e0af6da63b039926b","kubernetes.io/config.seen":"2021-07-08T23:43:23.463729331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e","pid":1696,"status":"running","bundle":"/run/containers/storage/overlay-containers/66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e/userdata","rootfs":"/var/lib/containers/storage/overlay/c744810c8a09ccc54eaf6b538b13405ff75025ea0fcdf7c4f79b45507c315ea4/merged","created":"2021-07-08T23:43:28.74704463Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a5e28f4f","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationM
essagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a5e28f4f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.458654206Z","io.kubernetes.cri-o.Image":"ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.2","io.kubernetes.cri-o.ImageRef":"ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io
.kubernetes.pod.uid\":\"636f853856e082c029b85fb89a036300\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210708233938-257783_636f853856e082c029b85fb89a036300/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c744810c8a09ccc54eaf6b538b13405ff75025ea0fcdf7c4f79b45507c315ea4/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin"
:"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/636f853856e082c029b85fb89a036300/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/636f853856e082c029b85fb89a036300/containers/kube-scheduler/6f04df63\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"636f853856e082c029b85fb89a036300","kubernetes.io/config.hash":"636f853856e082c029b85fb89a036300","kubernetes.io/config.seen":"2021-07-08T23:43:23.463757039Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStop
USec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41","pid":1601,"status":"running","bundle":"/run/containers/storage/overlay-containers/76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41/userdata","rootfs":"/var/lib/containers/storage/overlay/01ae8557075556025556e28b3617bfe934a965557cd8fd4d435456c30b0c4d27/merged","created":"2021-07-08T23:43:28.33827029Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"364fba0d","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"364fba0d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",
\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.136323454Z","io.kubernetes.cri-o.Image":"05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2349193ca86d9558bc895849265d2bbd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210708233938-257783_2349193ca86d9558bc895849265d2bbd/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overla
y/01ae8557075556025556e28b3617bfe934a965557cd8fd4d435456c30b0c4d27/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2349193ca86d9558bc895849265d2bbd/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2349193ca86d9558bc895849265d2bbd/containe
rs/etcd/486736f1\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2349193ca86d9558bc895849265d2bbd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2349193ca86d9558bc895849265d2bbd","kubernetes.io/config.seen":"2021-07-08T23:43:23.463758229Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","pid":2476,"status":"running","bundle":"/run/containers/storage/overlay-co
ntainers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata","rootfs":"/var/lib/containers/storage/overlay/8e2d756f3e3d21bd67765fd6dc79466722b3777db9c24dd3c63a849026ee706e/merged","created":"2021-07-08T23:43:58.920322142Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:58.220124726Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:58.842364249Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":
"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kindnet-589hd","io.kubernetes.cri-o.Labels":"{\"app\":\"kindnet\",\"pod-template-generation\":\"1\",\"controller-revision-hash\":\"694b6fb659\",\"io.kubernetes.pod.uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kindnet-589hd\",\"tier\":\"node\",\"k8s-app\":\"kindnet\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-589hd_55f424f0-d7a4-418f-8572-27041384f3ba/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-589hd\",\"uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8e2d756f3e3d21bd67765fd6dc7946672
2b3777db9c24dd3c63a849026ee706e/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/shm","io.kubernetes.pod.name":"kindnet-589hd","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"55f424f0-d7a4-418f-8572-27041384f3ba","k8s-app":"kindnet","ku
bernetes.io/config.seen":"2021-07-08T23:43:58.220124726Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a","pid":2554,"status":"running","bundle":"/run/containers/storage/overlay-containers/7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a/userdata","rootfs":"/var/lib/containers/storage/overlay/462c6688ce1d023d5df1b74afd144759f5b176d71761f6bc62065141ab582bf5/merged","created":"2021-07-08T23:43:59.140412019Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"73cb1b1","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"73cb1b1\",\"io.kub
ernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:59.029318377Z","io.kubernetes.cri-o.Image":"d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.2","io.kubernetes.cri-o.ImageRef":"d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-rb2ws\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-rb2ws_06346e
2c-5d4d-4e26-9d87-bfe3d4715985/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/462c6688ce1d023d5df1b74afd144759f5b176d71761f6bc62065141ab582bf5/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"con
tainer_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/containers/kube-proxy/343cc99a\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/volumes/kubernetes.io~projected/kube-api-access-2vk7z\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-rb2ws","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"06346e2c-5d4d-4e26-9d87-bfe3d4715985","kubernetes.io/config.se
en":"2021-07-08T23:43:58.246007990Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","pid":1533,"status":"running","bundle":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata","rootfs":"/var/lib/containers/storage/overlay/72dc08e222ab55d43eaea1871cbcf2481a5b6ed4398bc531f5b83c9c2bf82abc/merged","created":"2021-07-08T23:43:27.9820761Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"2349193ca86d9558bc895849265d2bbd\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.58.2:2379\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463758229Z\",\"kubernetes.io/config.source\":\"file\"}","io.kub
ernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.685254972Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"etcd-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"2349193ca86d9558bc895849265d2bbd\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210708233938-257783\",\"io.kubernetes.c
ontainer.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210708233938-257783_2349193ca86d9558bc895849265d2bbd/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210708233938-257783\",\"uid\":\"2349193ca86d9558bc895849265d2bbd\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72dc08e222ab55d43eaea1871cbcf2481a5b6ed4398bc531f5b83c9c2bf82abc/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeH
andler":"","io.kubernetes.cri-o.SandboxID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2349193ca86d9558bc895849265d2bbd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2349193ca86d9558bc895849265d2bbd","kubernetes.io/config.seen":"2021-07-08T23:43:23.463758229Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","pid":1526,"status":"running","bundle":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd0
02fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata","rootfs":"/var/lib/containers/storage/overlay/f9500faec4678f8eedd7e562c4634c3983eea5a8367363ee2114993ba2617eb9/merged","created":"2021-07-08T23:43:28.02245442Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"636f853856e082c029b85fb89a036300\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463757039Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.680724389Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true"
,"io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"636f853856e082c029b85fb89a036300\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210708233938-257783\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210708233938-257783_636f853856e082c029b85fb89a036300/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210708233938-257783\",\"uid\":\"636f853856e082c029b85fb89a036300\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/
containers/storage/overlay/f9500faec4678f8eedd7e562c4634c3983eea5a8367363ee2114993ba2617eb9/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.p
od.namespace":"kube-system","io.kubernetes.pod.uid":"636f853856e082c029b85fb89a036300","kubernetes.io/config.hash":"636f853856e082c029b85fb89a036300","kubernetes.io/config.seen":"2021-07-08T23:43:23.463757039Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e","pid":2536,"status":"running","bundle":"/run/containers/storage/overlay-containers/aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e/userdata","rootfs":"/var/lib/containers/storage/overlay/196f295295a6ebd45ec80ca7af0769b45f724efb7c52e5a54faf0894d74b8486/merged","created":"2021-07-08T23:43:59.094903496Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"42880ebe","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernet
es.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"42880ebe\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:59.000934542Z","io.kubernetes.cri-o.Image":"f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-589hd\",\"io.kubernetes.pod.namespace\":\"kube-system\"
,\"io.kubernetes.pod.uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-589hd_55f424f0-d7a4-418f-8572-27041384f3ba/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/196f295295a6ebd45ec80ca7af0769b45f724efb7c52e5a54faf0894d74b8486/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":
"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/containers/kindnet-cni/63efdea9\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/volumes/kubernetes.io~projected/kube-api-access-vxfqs\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-589hd","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"55f424f0-d7
a4-418f-8572-27041384f3ba","kubernetes.io/config.seen":"2021-07-08T23:43:58.220124726Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7","pid":3117,"status":"running","bundle":"/run/containers/storage/overlay-containers/b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7/userdata","rootfs":"/var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged","created":"2021-07-08T23:44:44.929527419Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3ba99b8a","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"T
CP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3ba99b8a\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:44:44.869981068Z","io.kubernetes.cr
i-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-mnwpk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-mnwpk_cd8ce294-9dba-4d2e-8793-cc0862414323/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.ResolvPath":"/run/co
ntainers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/containers/coredns/ebcb451b\",\"readonly\":false},{\"contai
ner_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/volumes/kubernetes.io~projected/kube-api-access-wjk4b\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-mnwpk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cd8ce294-9dba-4d2e-8793-cc0862414323","kubernetes.io/config.seen":"2021-07-08T23:44:44.378304571Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","pid":3088,"status":"running","bundle":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata","rootfs":"/var/lib/containers/storage/overlay/7c390972a7ebbdd53365500f2760439b1c797f16f323006acdd93709af97278c/merged",
"created":"2021-07-08T23:44:44.819414214Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-07-08T23:44:44.378304571Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"vethad721594\",\"mac\":\"fa:ff:ad:c6:25:66\"},{\"name\":\"eth0\",\"mac\":\"22:75:6a:ff:8f:5c\",\"sandbox\":\"/var/run/netns/ec4d5ca9-9e24-41d3-8013-97d3a7a811bd\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T2
3:44:44.69371705Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-mnwpk","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-mnwpk","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-mnwpk\",\"pod-template-hash\":\"558bd4d5db\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-mnwpk_cd8ce294-9dba-4d2e-8793-cc0862414323/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-mnwpk\",\"uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\",\"namespace\":\
"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7c390972a7ebbdd53365500f2760439b1c797f16f323006acdd93709af97278c/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-mnwpk","
io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cd8ce294-9dba-4d2e-8793-cc0862414323","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-07-08T23:44:44.378304571Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","pid":1483,"status":"running","bundle":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata","rootfs":"/var/lib/containers/storage/overlay/efa90599834890f8bd3da27a3a749f188e95537431bf95c2fbbda75a1a376820/merged","created":"2021-07-08T23:43:27.88584733Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463755710Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes
.io/config.hash\":\"c0a79d1d801cddeaa32444663181957f\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.588501479Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"c0a79d1d801cddeaa32444663181957f\",\"io.kubernetes.container.name\":\"
POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210708233938-257783\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210708233938-257783_c0a79d1d801cddeaa32444663181957f/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210708233938-257783\",\"uid\":\"c0a79d1d801cddeaa32444663181957f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efa90599834890f8bd3da27a3a749f188e95537431bf95c2fbbda75a1a376820/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"tr
ue","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.hash":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.seen":"2021-07-08T23:43:23.463755710Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"edb6f1460db485be501f94018d5caf7a
576fdd2e67b51c15322cf821191a0ebb","pid":2500,"status":"running","bundle":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata","rootfs":"/var/lib/containers/storage/overlay/621192df224b4f253243649a866cb69454571da103b6d3f3b1234d53c88440fd/merged","created":"2021-07-08T23:43:58.96207639Z","annotations":{"controller-revision-hash":"6896ccdc5","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:58.246007990Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:58.878275037Z","io.kubernetes.cri-o.HostName":"pause-202107
08233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-proxy-rb2ws","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-rb2ws\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"6896ccdc5\",\"pod-template-generation\":\"1\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-rb2ws_06346e2c-5d4d-4e26-9d87-bfe3d4715985/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-rb2ws\",\"uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"
/var/lib/containers/storage/overlay/621192df224b4f253243649a866cb69454571da103b6d3f3b1234d53c88440fd/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/shm","io.kubernetes.pod.name":"kube-proxy-rb2ws","io.kubernetes.pod.namespace":"kube-system","io.kuberne
tes.pod.uid":"06346e2c-5d4d-4e26-9d87-bfe3d4715985","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-07-08T23:43:58.246007990Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c","pid":1608,"status":"running","bundle":"/run/containers/storage/overlay-containers/f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c/userdata","rootfs":"/var/lib/containers/storage/overlay/acf62325b91e31207f04e3f39616a0820b0809fa7e55c2b2ce5eaf30b7367ddc/merged","created":"2021-07-08T23:43:28.292409803Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"44b38584","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o
.Annotations":"{\"io.kubernetes.container.hash\":\"44b38584\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.165591981Z","io.kubernetes.cri-o.Image":"2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.2","io.kubernetes.cri-o.ImageRef":"2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"48a917795140826e0
af6da63b039926b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210708233938-257783_48a917795140826e0af6da63b039926b/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/acf62325b91e31207f04e3f39616a0820b0809fa7e55c2b2ce5eaf30b7367ddc/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":
"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/48a917795140826e0af6da63b039926b/containers/kube-apiserver/141310e0\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/48a917795140826e0af6da63b039926b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.pod.nam
espace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"48a917795140826e0af6da63b039926b","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48a917795140826e0af6da63b039926b","kubernetes.io/config.seen":"2021-07-08T23:43:23.463729331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0708 23:44:50.889436  383102 cri.go:113] list returned 14 containers
	I0708 23:44:50.889463  383102 cri.go:116] container: {ID:0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef Status:running}
	I0708 23:44:50.889494  383102 cri.go:122] skipping {0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef running}: state = "running", want "paused"
	I0708 23:44:50.889513  383102 cri.go:116] container: {ID:153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4 Status:running}
	I0708 23:44:50.889538  383102 cri.go:118] skipping 153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4 - not in ps
	I0708 23:44:50.889556  383102 cri.go:116] container: {ID:66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e Status:running}
	I0708 23:44:50.889571  383102 cri.go:122] skipping {66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e running}: state = "running", want "paused"
	I0708 23:44:50.889587  383102 cri.go:116] container: {ID:76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41 Status:running}
	I0708 23:44:50.889601  383102 cri.go:122] skipping {76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41 running}: state = "running", want "paused"
	I0708 23:44:50.889626  383102 cri.go:116] container: {ID:79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2 Status:running}
	I0708 23:44:50.889644  383102 cri.go:118] skipping 79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2 - not in ps
	I0708 23:44:50.889657  383102 cri.go:116] container: {ID:7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a Status:running}
	I0708 23:44:50.889671  383102 cri.go:122] skipping {7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a running}: state = "running", want "paused"
	I0708 23:44:50.889687  383102 cri.go:116] container: {ID:7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3 Status:running}
	I0708 23:44:50.889711  383102 cri.go:118] skipping 7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3 - not in ps
	I0708 23:44:50.889726  383102 cri.go:116] container: {ID:98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957 Status:running}
	I0708 23:44:50.889739  383102 cri.go:118] skipping 98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957 - not in ps
	I0708 23:44:50.889751  383102 cri.go:116] container: {ID:aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e Status:running}
	I0708 23:44:50.889763  383102 cri.go:122] skipping {aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e running}: state = "running", want "paused"
	I0708 23:44:50.889786  383102 cri.go:116] container: {ID:b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 Status:running}
	I0708 23:44:50.889802  383102 cri.go:122] skipping {b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 running}: state = "running", want "paused"
	I0708 23:44:50.889816  383102 cri.go:116] container: {ID:ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f Status:running}
	I0708 23:44:50.889831  383102 cri.go:118] skipping ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f - not in ps
	I0708 23:44:50.889844  383102 cri.go:116] container: {ID:ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe Status:running}
	I0708 23:44:50.889868  383102 cri.go:118] skipping ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe - not in ps
	I0708 23:44:50.889884  383102 cri.go:116] container: {ID:edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb Status:running}
	I0708 23:44:50.889899  383102 cri.go:118] skipping edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb - not in ps
	I0708 23:44:50.889910  383102 cri.go:116] container: {ID:f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c Status:running}
	I0708 23:44:50.889924  383102 cri.go:122] skipping {f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c running}: state = "running", want "paused"
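The cri.go lines above show the pause-time filter: every container is skipped either because its ID is not in the requested ps set (cri.go:118) or because its state is "running" when "paused" is wanted (cri.go:122). A minimal sketch of that filter, with illustrative names rather than minikube's actual implementation:

package main

import "fmt"

type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers that are both listed in ps and in the
// wanted state, mirroring the two skip branches logged above.
func filterByState(all []container, ps map[string]bool, want string) []string {
	var ids []string
	for _, c := range all {
		if !ps[c.ID] {
			continue // cri.go:118: skipping <id> - not in ps
		}
		if c.Status != want {
			continue // cri.go:122: state = "running", want "paused"
		}
		ids = append(ids, c.ID)
	}
	return ids
}

func main() {
	all := []container{{ID: "0cb308b9b448", Status: "running"}}
	// Every container above is running, so nothing survives a want of "paused".
	fmt.Println(filterByState(all, map[string]bool{"0cb308b9b448": true}, "paused"))
}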
	I0708 23:44:50.889976  383102 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 23:44:50.896457  383102 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0708 23:44:50.896471  383102 kubeadm.go:600] restartCluster start
	I0708 23:44:50.896504  383102 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0708 23:44:50.901607  383102 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 23:44:50.902345  383102 kubeconfig.go:93] found "pause-20210708233938-257783" server: "https://192.168.58.2:8443"
	I0708 23:44:50.902810  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:50.904266  383102 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 23:44:50.910038  383102 api_server.go:164] Checking apiserver status ...
	I0708 23:44:50.910093  383102 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:44:50.921551  383102 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1608/cgroup
	I0708 23:44:50.927266  383102 api_server.go:180] apiserver freezer: "11:freezer:/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/system.slice/crio-f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c.scope"
	I0708 23:44:50.927324  383102 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/system.slice/crio-f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c.scope/freezer.state
	I0708 23:44:50.932380  383102 api_server.go:202] freezer state: "THAWED"
	I0708 23:44:50.932400  383102 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0708 23:44:50.940647  383102 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
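The api_server.go lines above verify the apiserver two ways: its freezer cgroup must read THAWED, and /healthz must return 200 "ok". A minimal sketch of the same two probes; it skips the client-certificate handling the real check performs and assumes /healthz is readable anonymously, as it typically is on kubeadm-style clusters:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Freezer path taken verbatim from the log above.
	const freezer = "/sys/fs/cgroup/freezer/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/system.slice/crio-f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c.scope/freezer.state"
	if b, err := os.ReadFile(freezer); err == nil && strings.TrimSpace(string(b)) != "THAWED" {
		fmt.Println("apiserver cgroup is frozen:", strings.TrimSpace(string(b)))
		return
	}

	// Probe /healthz. InsecureSkipVerify stands in for the CA handling the
	// real check performs with the profile's certificates.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}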
	I0708 23:44:50.968340  383102 system_pods.go:86] 7 kube-system pods found
	I0708 23:44:50.968365  383102 system_pods.go:89] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:50.968372  383102 system_pods.go:89] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:50.968381  383102 system_pods.go:89] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:50.968389  383102 system_pods.go:89] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:50.968394  383102 system_pods.go:89] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:50.968404  383102 system_pods.go:89] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:50.968409  383102 system_pods.go:89] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:50.969071  383102 api_server.go:139] control plane version: v1.21.2
	I0708 23:44:50.969091  383102 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.58.2
	I0708 23:44:50.969100  383102 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0708 23:44:50.969105  383102 kubeadm.go:604] restartCluster took 72.629672ms
	I0708 23:44:50.969114  383102 kubeadm.go:392] StartCluster complete in 143.022344ms
	I0708 23:44:50.969124  383102 settings.go:142] acquiring lock: {Name:mkd7e81a263e91a8570dc867d9c6f95db0e3f272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:44:50.969188  383102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:44:50.969783  383102 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig: {Name:mk7ece99e42242db0c85d6c11531cc9d1c12a34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:44:50.970369  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:50.973359  383102 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210708233938-257783" rescaled to 1
	I0708 23:44:50.973409  383102 start.go:220] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0708 23:44:50.977036  383102 out.go:165] * Verifying Kubernetes components...
	I0708 23:44:50.977080  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:50.973644  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 23:44:50.973655  383102 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0708 23:44:50.977189  383102 addons.go:59] Setting storage-provisioner=true in profile "pause-20210708233938-257783"
	I0708 23:44:50.977229  383102 addons.go:135] Setting addon storage-provisioner=true in "pause-20210708233938-257783"
	W0708 23:44:50.977246  383102 addons.go:147] addon storage-provisioner should already be in state true
	I0708 23:44:50.977293  383102 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:50.977346  383102 addons.go:59] Setting default-storageclass=true in profile "pause-20210708233938-257783"
	I0708 23:44:50.977366  383102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210708233938-257783"
	I0708 23:44:50.977642  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:50.977846  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:51.040750  383102 out.go:165]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 23:44:51.040845  383102 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:44:51.040854  383102 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 23:44:51.040902  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:51.059995  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:51.063879  383102 addons.go:135] Setting addon default-storageclass=true in "pause-20210708233938-257783"
	W0708 23:44:51.063911  383102 addons.go:147] addon default-storageclass should already be in state true
	I0708 23:44:51.063955  383102 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:51.064454  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:51.120151  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
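
sshutil dials the Docker-published SSH port of the node container (49617 here, mapped from 22/tcp) with the profile's generated key. A sketch of the same connection using golang.org/x/crypto/ssh (the key path is a placeholder for the id_rsa path above; host-key checking is skipped, as is reasonable only in a throwaway test rig):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/.minikube/machines/<profile>/id_rsa") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:49617", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	// The same kind of command ssh_runner executes above.
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("kubelet: %s", out)
}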
	I0708 23:44:51.133089  383102 start.go:710] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0708 23:44:51.133129  383102 node_ready.go:35] waiting up to 6m0s for node "pause-20210708233938-257783" to be "Ready" ...
	I0708 23:44:51.144796  383102 node_ready.go:49] node "pause-20210708233938-257783" has status "Ready":"True"
	I0708 23:44:51.144810  383102 node_ready.go:38] duration metric: took 11.663188ms waiting for node "pause-20210708233938-257783" to be "Ready" ...
	I0708 23:44:51.144817  383102 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 23:44:51.151821  383102 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 23:44:51.151836  383102 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 23:44:51.151881  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:51.162008  383102 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.178412  383102 pod_ready.go:92] pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.178425  383102 pod_ready.go:81] duration metric: took 16.393726ms waiting for pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.178434  383102 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.182215  383102 pod_ready.go:92] pod "etcd-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.182231  383102 pod_ready.go:81] duration metric: took 3.790081ms waiting for pod "etcd-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.182242  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.185941  383102 pod_ready.go:92] pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.185957  383102 pod_ready.go:81] duration metric: took 3.703058ms waiting for pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.185966  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.193311  383102 pod_ready.go:92] pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.193326  383102 pod_ready.go:81] duration metric: took 7.350387ms waiting for pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.193335  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb2ws" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.199623  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:51.228409  383102 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:44:51.289804  383102 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 23:44:51.544987  383102 pod_ready.go:92] pod "kube-proxy-rb2ws" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.545034  383102 pod_ready.go:81] duration metric: took 351.691462ms waiting for pod "kube-proxy-rb2ws" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.545056  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.611304  383102 out.go:165] * Enabled addons: storage-provisioner, default-storageclass
	I0708 23:44:51.611327  383102 addons.go:344] enableAddons completed in 637.673923ms
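
Each addon is staged under /etc/kubernetes/addons and then applied with the node's own kubectl binary against the node-local admin kubeconfig, exactly as the two Run lines above show. A sketch of that apply step (the plain exec wrapper here stands in for minikube's ssh_runner):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the logged command: sudo accepts the leading VAR=value
	// assignment, so kubectl runs with the node's kubeconfig.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.21.2/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	log.Printf("applied: %s", out)
}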
	I0708 23:44:51.944191  383102 pod_ready.go:92] pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.944240  383102 pod_ready.go:81] duration metric: took 399.15943ms waiting for pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.944260  383102 pod_ready.go:38] duration metric: took 799.430802ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
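
The pod_ready waits above poll each system-critical pod until its PodReady condition turns True, recording a duration metric per pod. A condensed client-go sketch of one such wait (the kubeconfig path is a placeholder; the pod name is taken from the log):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Poll until the pod reports condition Ready=True, or 6m0s elapses.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-558bd4d5db-mnwpk", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API error: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready")
}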
	I0708 23:44:51.944284  383102 api_server.go:50] waiting for apiserver process to appear ...
	I0708 23:44:51.944353  383102 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:44:51.962521  383102 api_server.go:70] duration metric: took 989.086682ms to wait for apiserver process to appear ...
	I0708 23:44:51.962540  383102 api_server.go:86] waiting for apiserver healthz status ...
	I0708 23:44:51.962549  383102 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0708 23:44:51.976017  383102 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
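
The healthz check itself is a plain HTTPS GET that passes once the apiserver answers 200 with body "ok". A standard-library sketch (simplified: the real client trusts the cluster CA rather than skipping verification):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification for the sketch; use the cluster CA in practice.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}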
	I0708 23:44:51.976872  383102 api_server.go:139] control plane version: v1.21.2
	I0708 23:44:51.976889  383102 api_server.go:129] duration metric: took 14.342835ms to wait for apiserver health ...
	I0708 23:44:51.976896  383102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 23:44:52.147101  383102 system_pods.go:59] 8 kube-system pods found
	I0708 23:44:52.147126  383102 system_pods.go:61] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:52.147132  383102 system_pods.go:61] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:52.147156  383102 system_pods.go:61] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:52.147170  383102 system_pods.go:61] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:52.147175  383102 system_pods.go:61] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:52.147180  383102 system_pods.go:61] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:52.147188  383102 system_pods.go:61] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:52.147196  383102 system_pods.go:61] "storage-provisioner" [939f2223-21e0-4e8d-8f43-fd8f9cc992b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 23:44:52.147205  383102 system_pods.go:74] duration metric: took 170.300522ms to wait for pod list to return data ...
	I0708 23:44:52.147214  383102 default_sa.go:34] waiting for default service account to be created ...
	I0708 23:44:52.344080  383102 default_sa.go:45] found service account: "default"
	I0708 23:44:52.344097  383102 default_sa.go:55] duration metric: took 196.867452ms for default service account to be created ...
	I0708 23:44:52.344104  383102 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 23:44:52.546575  383102 system_pods.go:86] 8 kube-system pods found
	I0708 23:44:52.546597  383102 system_pods.go:89] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:52.546603  383102 system_pods.go:89] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:52.546608  383102 system_pods.go:89] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:52.546614  383102 system_pods.go:89] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:52.546619  383102 system_pods.go:89] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:52.546624  383102 system_pods.go:89] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:52.546629  383102 system_pods.go:89] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:52.546638  383102 system_pods.go:89] "storage-provisioner" [939f2223-21e0-4e8d-8f43-fd8f9cc992b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 23:44:52.546644  383102 system_pods.go:126] duration metric: took 202.535502ms to wait for k8s-apps to be running ...
	I0708 23:44:52.546651  383102 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 23:44:52.546691  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:52.554858  383102 system_svc.go:56] duration metric: took 8.204667ms WaitForService to wait for kubelet.
	I0708 23:44:52.554876  383102 kubeadm.go:547] duration metric: took 1.581445531s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0708 23:44:52.554910  383102 node_conditions.go:102] verifying NodePressure condition ...
	I0708 23:44:52.744446  383102 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0708 23:44:52.744473  383102 node_conditions.go:123] node cpu capacity is 2
	I0708 23:44:52.744486  383102 node_conditions.go:105] duration metric: took 189.57062ms to run NodePressure ...
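
node_conditions.go reads those capacity figures directly off the Node object's status. A sketch of the lookup (kubeconfig path is a placeholder; the node name comes from the log):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"pause-20210708233938-257783", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())
}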
	I0708 23:44:52.744495  383102 start.go:225] waiting for startup goroutines ...
	I0708 23:44:52.795296  383102 start.go:462] kubectl: 1.21.2, cluster: 1.21.2 (minor skew: 0)
	I0708 23:44:52.798688  383102 out.go:165] * Done! kubectl is now configured to use "pause-20210708233938-257783" cluster and "default" namespace by default
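
The closing version line compares the host kubectl's minor version against the cluster's; minikube only warns when the skew grows beyond what kubectl's version-skew policy supports. A toy version of the comparison (assumes well-formed major.minor.patch strings):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor component from a "major.minor.patch" version.
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	n, _ := strconv.Atoi(parts[1])
	return n
}

func main() {
	client, cluster := "1.21.2", "1.21.2"
	skew := minorOf(client) - minorOf(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}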
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Thu 2021-07-08 23:42:57 UTC, end at Thu 2021-07-08 23:44:57 UTC. --
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.752367414Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-mnwpk Namespace:kube-system ID:ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f NetNS:/var/run/netns/ec4d5ca9-9e24-41d3-8013-97d3a7a811bd Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.752537333Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.851068596Z" level=info msg="Ran pod sandbox ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f with infra container: kube-system/coredns-558bd4d5db-mnwpk/POD" id=c9132cb2-089f-4563-8891-94bd70e68b31 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.851819090Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.0" id=535e481f-c9b5-4da0-888e-28da677e78c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.852397777Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:919b800fed6eaf6c9a55c3017c0aa3187bfe5d81abefbe49bb27f968458b94cc k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:39402464,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=535e481f-c9b5-4da0-888e-28da677e78c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.855119319Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.0" id=c226c346-9d15-4dc7-8640-d22668769349 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.855626237Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:919b800fed6eaf6c9a55c3017c0aa3187bfe5d81abefbe49bb27f968458b94cc k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:39402464,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c226c346-9d15-4dc7-8640-d22668769349 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.856387143Z" level=info msg="Creating container: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=9ffe0ac0-8fdf-4cab-92cd-d96e15acb1f8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.870099418Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged/etc/passwd: no such file or directory"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.870133896Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged/etc/group: no such file or directory"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.944318612Z" level=info msg="Created container b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=9ffe0ac0-8fdf-4cab-92cd-d96e15acb1f8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.944919240Z" level=info msg="Starting container: b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7" id=99de0758-72ab-4e9c-b175-4fef1b41793e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.955103274Z" level=info msg="Started container b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=99de0758-72ab-4e9c-b175-4fef1b41793e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:51 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:51.912211778Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=3fe405bf-c337-430c-ba8b-4acaabc95cf2 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.048464048Z" level=info msg="Ran pod sandbox 049e6b5335b3d37bd7b1f71f526dfb38a2146de747c5333e44dd562b58da320c with infra container: kube-system/storage-provisioner/POD" id=3fe405bf-c337-430c-ba8b-4acaabc95cf2 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.049231829Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b3e22441-f73c-48a3-b70b-8df95e9c6a80 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.049808794Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b3e22441-f73c-48a3-b70b-8df95e9c6a80 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.050512018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6d37e3ff-30d9-415f-ab55-83d772199ce8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.051005283Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6d37e3ff-30d9-415f-ab55-83d772199ce8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.051652721Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cbb3b7bb-00a6-411a-970c-153e4e488ad5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.065018823Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2db347e4f0d5d5b51e801807bc8894287c0f6d7b8ece1a922cadd38989584d2d/merged/etc/passwd: no such file or directory"
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.065122749Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2db347e4f0d5d5b51e801807bc8894287c0f6d7b8ece1a922cadd38989584d2d/merged/etc/group: no such file or directory"
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.130702118Z" level=info msg="Created container ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf: kube-system/storage-provisioner/storage-provisioner" id=cbb3b7bb-00a6-411a-970c-153e4e488ad5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.131445227Z" level=info msg="Starting container: ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf" id=5736abf9-7c6e-4d2d-99f4-1b9d9b3933f2 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.141586857Z" level=info msg="Started container ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf: kube-system/storage-provisioner/storage-provisioner" id=5736abf9-7c6e-4d2d-99f4-1b9d9b3933f2 name=/runtime.v1alpha2.RuntimeService/StartContainer
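
Every entry in this block is CRI-O answering a gRPC call on the v1alpha2 CRI services named in the log (RunPodSandbox, ImageStatus, CreateContainer, StartContainer). A minimal client against the same socket, listing containers much as crictl ps does (assumes CRI-O's default socket path and the v1alpha2 API current for this release):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's default socket; the same endpoint crictl uses.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Truncated IDs, like the "container status" table below.
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}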
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	ebc191d78d332       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 seconds ago        Running             storage-provisioner       0                   049e6b5335b3d
	b7d6404120fcb       1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8   13 seconds ago       Running             coredns                   0                   ded1c1360c407
	7ca432c9b0953       d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105   58 seconds ago       Running             kube-proxy                0                   edb6f1460db48
	aa26d8524150c       f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301   58 seconds ago       Running             kindnet-cni               0                   79814c347cb14
	0cb308b9b448f       9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630   About a minute ago   Running             kube-controller-manager   0                   ebf106620bd16
	66d5fee706a3d       ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4   About a minute ago   Running             kube-scheduler            0                   98331c8576b70
	76999b0177398       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28   About a minute ago   Running             etcd                      0                   7df3a2be1b33d
	f275fc53ae00f       2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0   About a minute ago   Running             kube-apiserver            0                   153d3d24ac6ae
	
	* 
	* ==> coredns [b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210708233938-257783
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-20210708233938-257783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5
	                    minikube.k8s.io/name=pause-20210708233938-257783
	                    minikube.k8s.io/updated_at=2021_07_08T23_43_45_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 08 Jul 2021 23:43:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210708233938-257783
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 08 Jul 2021 23:44:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:44:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    pause-20210708233938-257783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                06c382d0-5723-4c28-97d9-2bf95fc86b49
	  Boot ID:                    7cbe50af-3171-4d81-8fca-78216a04984f
	  Kernel Version:             5.8.0-1038-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.2
	  Kube-Proxy Version:         v1.21.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-mnwpk                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     60s
	  kube-system                 etcd-pause-20210708233938-257783                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         69s
	  kube-system                 kindnet-589hd                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-pause-20210708233938-257783             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-pause-20210708233938-257783    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-rb2ws                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-pause-20210708233938-257783             100m (5%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  91s (x8 over 92s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s (x7 over 92s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s (x7 over 92s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 70s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 59s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                19s                kubelet     Node pause-20210708233938-257783 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000671] FS-Cache: O-key=[8] '77e60b0000000000'
	[  +0.000514] FS-Cache: N-cookie c=00000000e6b84f6b [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000917] FS-Cache: N-cookie d=0000000052778918 n=000000009967b9dc
	[  +0.000663] FS-Cache: N-key=[8] '77e60b0000000000'
	[  +0.001810] FS-Cache: Duplicate cookie detected
	[  +0.000530] FS-Cache: O-cookie c=0000000057c7fc1d [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=0000000052778918 n=00000000efae32c9
	[  +0.000673] FS-Cache: O-key=[8] '77e60b0000000000'
	[  +0.000542] FS-Cache: N-cookie c=00000000f56d3f5d [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000863] FS-Cache: N-cookie d=0000000052778918 n=00000000e997ef03
	[  +0.000702] FS-Cache: N-key=[8] '77e60b0000000000'
	[  +1.187985] FS-Cache: Duplicate cookie detected
	[  +0.000541] FS-Cache: O-cookie c=000000000ea7a21c [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.000903] FS-Cache: O-cookie d=0000000052778918 n=00000000f7f72a4b
	[  +0.000697] FS-Cache: O-key=[8] '76e60b0000000000'
	[  +0.000532] FS-Cache: N-cookie c=00000000dc14d28d [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000872] FS-Cache: N-cookie d=0000000052778918 n=00000000fd1ba8e6
	[  +0.000719] FS-Cache: N-key=[8] '76e60b0000000000'
	[  +0.299966] FS-Cache: Duplicate cookie detected
	[  +0.000563] FS-Cache: O-cookie c=00000000b39eb93d [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.000913] FS-Cache: O-cookie d=0000000052778918 n=00000000654c5f24
	[  +0.000696] FS-Cache: O-key=[8] '79e60b0000000000'
	[  +0.000542] FS-Cache: N-cookie c=000000004dd4c5bf [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=0000000052778918 n=000000008dfb704a
	[  +0.000684] FS-Cache: N-key=[8] '79e60b0000000000'
	
	* 
	* ==> etcd [76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41] <==
	* 2021-07-08 23:43:29.374785 I | etcdserver: setting up the initial cluster version to 3.4
	2021-07-08 23:43:29.407011 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-07-08 23:43:29.407103 I | etcdserver/api: enabled capabilities for version 3.4
	2021-07-08 23:43:29.407157 I | etcdserver: published {Name:pause-20210708233938-257783 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-07-08 23:43:29.407465 I | embed: ready to serve client requests
	2021-07-08 23:43:29.415509 I | embed: serving client requests on 127.0.0.1:2379
	2021-07-08 23:43:29.423096 I | embed: ready to serve client requests
	2021-07-08 23:43:29.424420 I | embed: serving client requests on 192.168.58.2:2379
	2021-07-08 23:43:38.896326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:43:39.824755 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:heapster\" " with result "range_response_count:0 size:4" took too long (132.781472ms) to execute
	2021-07-08 23:43:40.062517 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (126.484476ms) to execute
	2021-07-08 23:43:40.062723 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:node-bootstrapper\" " with result "range_response_count:0 size:4" took too long (157.087895ms) to execute
	2021-07-08 23:43:41.406099 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:kube-scheduler\" " with result "range_response_count:0 size:5" took too long (106.875165ms) to execute
	2021-07-08 23:43:41.406344 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210708233938-257783\" " with result "range_response_count:1 size:5706" took too long (100.988866ms) to execute
	2021-07-08 23:43:41.800497 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler\" " with result "range_response_count:0 size:5" took too long (104.221848ms) to execute
	2021-07-08 23:43:42.415075 W | etcdserver: read-only range request "key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" " with result "range_response_count:0 size:5" took too long (113.749552ms) to execute
	2021-07-08 23:43:42.790083 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system:controller:cloud-provider\" " with result "range_response_count:0 size:5" took too long (139.951306ms) to execute
	2021-07-08 23:43:42.790797 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (104.526083ms) to execute
	2021-07-08 23:43:55.482711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:43:58.854149 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:08.855081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:18.853976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:28.854207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:38.854850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:48.854356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
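
The recurring "/health OK (status code 200)" lines are etcd's HTTP health endpoint being probed on the client ports advertised at startup (127.0.0.1:2379 and 192.168.58.2:2379). Because etcd serves clients over mutual TLS, a hand-rolled probe must present an etcd client certificate; the cert paths below follow kubeadm's usual layout and are assumptions:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	cert, err := tls.LoadX509KeyPair(
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt", // assumed kubeadm-style paths
		"/var/lib/minikube/certs/etcd/healthcheck-client.key")
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://127.0.0.1:2379/health")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // expect {"health":"true"}
}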
	
	* 
	* ==> kernel <==
	*  23:44:58 up  2:27,  0 users,  load average: 4.09, 2.92, 1.91
	Linux pause-20210708233938-257783 5.8.0-1038-aws #40~20.04.1-Ubuntu SMP Thu Jun 17 13:20:15 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c] <==
	* I0708 23:43:38.659980       1 cache.go:39] Caches are synced for autoregister controller
	I0708 23:43:38.660019       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0708 23:43:38.765817       1 controller.go:611] quota admission added evaluator for: namespaces
	I0708 23:43:39.399647       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0708 23:43:39.399669       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0708 23:43:39.413314       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0708 23:43:39.428902       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0708 23:43:39.428920       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0708 23:43:42.417829       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 23:43:42.615824       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0708 23:43:42.951365       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0708 23:43:42.952308       1 controller.go:611] quota admission added evaluator for: endpoints
	I0708 23:43:42.961082       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 23:43:44.101667       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0708 23:43:44.674905       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0708 23:43:44.719032       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0708 23:43:48.275800       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 23:43:58.042264       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0708 23:43:58.285534       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0708 23:44:04.479561       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:44:04.479599       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:44:04.479606       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:44:35.434880       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:44:35.434919       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:44:35.434927       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef] <==
	* I0708 23:43:57.880761       1 shared_informer.go:247] Caches are synced for HPA 
	I0708 23:43:57.880837       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0708 23:43:57.907676       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0708 23:43:57.908756       1 shared_informer.go:247] Caches are synced for endpoint 
	I0708 23:43:57.952724       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210708233938-257783" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0708 23:43:57.956872       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210708233938-257783" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0708 23:43:58.003743       1 shared_informer.go:247] Caches are synced for deployment 
	I0708 23:43:58.061826       1 shared_informer.go:247] Caches are synced for disruption 
	I0708 23:43:58.061841       1 disruption.go:371] Sending events to api server.
	I0708 23:43:58.113116       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0708 23:43:58.121312       1 shared_informer.go:247] Caches are synced for resource quota 
	I0708 23:43:58.138517       1 shared_informer.go:247] Caches are synced for resource quota 
	I0708 23:43:58.160698       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-589hd"
	I0708 23:43:58.238962       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rb2ws"
	I0708 23:43:58.288443       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	E0708 23:43:58.326235       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"85756639-7788-414f-aae2-a95c8ac59acd", ResourceVersion:"309", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761384625, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000d528a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000d528b8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001394920), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ki
ndnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d528d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FC
VolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d528e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolume
Source)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d52900), EmptyDir:(*v1.Emp
tyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVo
lume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001394940)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001394980)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40014b4240), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000f18168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a56700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400135e8f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000f181b0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0708 23:43:58.358158       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-xtvks"
	I0708 23:43:58.384252       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-mnwpk"
	I0708 23:43:58.532367       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0708 23:43:58.551775       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0708 23:43:58.551796       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0708 23:43:58.636207       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0708 23:43:58.654856       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-xtvks"
	I0708 23:44:42.867632       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a] <==
	* I0708 23:43:59.522352       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0708 23:43:59.522418       1 server_others.go:140] Detected node IP 192.168.58.2
	W0708 23:43:59.522436       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0708 23:43:59.592863       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0708 23:43:59.592891       1 server_others.go:212] Using iptables Proxier.
	I0708 23:43:59.592900       1 server_others.go:219] creating dualStackProxier for iptables.
	W0708 23:43:59.592910       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0708 23:43:59.593168       1 server.go:643] Version: v1.21.2
	I0708 23:43:59.593489       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0708 23:43:59.593530       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0708 23:43:59.594089       1 config.go:315] Starting service config controller
	I0708 23:43:59.594140       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0708 23:43:59.594778       1 config.go:224] Starting endpoint slice config controller
	I0708 23:43:59.594818       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0708 23:43:59.596985       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0708 23:43:59.598797       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0708 23:43:59.695058       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0708 23:43:59.695065       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e] <==
	* E0708 23:43:38.663259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 23:43:38.663881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:43:38.663980       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:43:38.664026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:43:38.664104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:43:38.664153       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:43:38.667689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:43:39.506225       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:43:39.684692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:43:39.707077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:39.715815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 23:43:39.739475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:43:39.927791       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:43:39.950708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:43:40.026534       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 23:43:40.052611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.106259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.125654       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.138747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:43:40.200954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 23:43:40.246523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0708 23:43:42.914398       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-07-08 23:42:57 UTC, end at Thu 2021-07-08 23:44:58 UTC. --
	Jul 08 23:44:03 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:03.910585    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:08 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:08.911690    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:09 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:09.035899    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:13 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:13.913137    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:18 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:18.914320    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:19 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:19.141178    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:23 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:23.915497    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:28 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:28.916031    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:29 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:29.195302    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:39 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:39.262592    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.378521    2084 topology_manager.go:187] "Topology Admit Handler"
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.439076    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd8ce294-9dba-4d2e-8793-cc0862414323-config-volume\") pod \"coredns-558bd4d5db-mnwpk\" (UID: \"cd8ce294-9dba-4d2e-8793-cc0862414323\") "
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.439122    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjk4b\" (UniqueName: \"kubernetes.io/projected/cd8ce294-9dba-4d2e-8793-cc0862414323-kube-api-access-wjk4b\") pod \"coredns-558bd4d5db-mnwpk\" (UID: \"cd8ce294-9dba-4d2e-8793-cc0862414323\") "
	Jul 08 23:44:49 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:49.318435    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:49 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:49.796926    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:50 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:50.049924    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:50 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:50.459815    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:51.106860    2084 container.go:586] Failed to update stats for container "/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7": /sys/fs/cgroup/cpuset/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/cpuset.cpus found to be empty, continuing to push stats
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:51.127776    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.610872    2084 topology_manager.go:187] "Topology Admit Handler"
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.679061    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ndmf\" (UniqueName: \"kubernetes.io/projected/939f2223-21e0-4e8d-8f43-fd8f9cc992b8-kube-api-access-6ndmf\") pod \"storage-provisioner\" (UID: \"939f2223-21e0-4e8d-8f43-fd8f9cc992b8\") "
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.679129    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/939f2223-21e0-4e8d-8f43-fd8f9cc992b8-tmp\") pod \"storage-provisioner\" (UID: \"939f2223-21e0-4e8d-8f43-fd8f9cc992b8\") "
	Jul 08 23:44:52 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:52.247269    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:53 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:53.995237    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:56 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:56.896369    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	
	* 
	* ==> storage-provisioner [ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf] <==
	* I0708 23:44:52.156408       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 23:44:52.170055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 23:44:52.170092       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 23:44:52.181346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 23:44:52.181466       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409!
	I0708 23:44:52.181651       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"812885a7-6ecb-4200-9882-e4b3a6fd0939", APIVersion:"v1", ResourceVersion:"519", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409 became leader
	I0708 23:44:52.282548       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409!
	

-- /stdout --
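Note: the kube-controller-manager entry in the logs above ends with `Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again`. That is Kubernetes' optimistic-concurrency conflict: an update was submitted against a stale resourceVersion. It is normally benign, because well-behaved clients re-read the object and retry. A minimal client-go sketch of that standard retry pattern follows; the function name and the annotation it sets are illustrative, not minikube or kindnet code.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// touchKindnet re-reads the DaemonSet on every attempt so the update
// always carries the latest resourceVersion; RetryOnConflict retries
// only when the apiserver answers with a 409 Conflict.
func touchKindnet(ctx context.Context, cs kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets("kube-system").Get(ctx, "kindnet", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Annotations == nil {
			ds.Annotations = map[string]string{}
		}
		ds.Annotations["example.invalid/touched"] = "true" // hypothetical change
		_, err = cs.AppsV1().DaemonSets("kube-system").Update(ctx, ds, metav1.UpdateOptions{})
		return err
	})
}

retry.DefaultRetry backs off briefly between attempts and gives up after a handful of conflicts; any non-conflict error aborts immediately.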
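Note: the kube-proxy log above also warns twice that discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+ and unavailable in v1.25+. The client-side fix is to read slices through the discovery.k8s.io/v1 group, available in client-go v0.21+ (matching the v1.21.2 cluster here). A hedged sketch:

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSlices lists EndpointSlices via the non-deprecated v1 API group.
func listSlices(ctx context.Context, cs kubernetes.Interface) error {
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
	return nil
}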
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210708233938-257783 -n pause-20210708233938-257783
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210708233938-257783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestPause/serial/Pause]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context pause-20210708233938-257783 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context pause-20210708233938-257783 describe pod : exit status 1 (56.102652ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:275: kubectl --context pause-20210708233938-257783 describe pod : exit status 1
--- FAIL: TestPause/serial/Pause (5.99s)
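Note: the kubelet section in the logs above repeats `Container runtime network not ready ... No CNI configuration file in /etc/cni/net.d/` until the CNI plugin (kindnet, in this profile) writes its network config; the runtime stays NetworkPluginNotReady for exactly as long as that directory holds no config. Purely as an illustration of that readiness condition, and not kubelet's actual implementation, a small Go poll:

package main

import (
	"fmt"
	"path/filepath"
	"time"
)

// waitForCNIConfig polls dir until a CNI network config shows up,
// mirroring the condition kubelet reports as NetworkPluginNotReady.
func waitForCNIConfig(dir string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
			matches, err := filepath.Glob(filepath.Join(dir, pat))
			if err != nil {
				return err
			}
			if len(matches) > 0 {
				fmt.Println("found CNI config:", matches[0])
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no CNI config in %s after %s", dir, timeout)
}

func main() {
	if err := waitForCNIConfig("/etc/cni/net.d", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}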

TestPause/serial/VerifyStatus (2.81s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-20210708233938-257783 --output=json --layout=cluster
status_test.go:79: expected command to fail, but it succeeded: out/minikube-linux-arm64 status -p pause-20210708233938-257783 --output=json --layout=cluster
<nil>
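Note: status_test.go:79 asserts the inverse of the usual check: after pausing, `minikube status` is expected to exit non-zero, and the test fails here precisely because the command succeeded. The generic Go pattern for asserting a non-zero exit looks roughly like the sketch below (the idea, not the actual test helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// expectFailure runs a command and treats a non-zero exit code as the
// desired outcome, returning an error only if the command succeeds
// (or cannot be started at all).
func expectFailure(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("command failed as expected (exit code %d)\n", exitErr.ExitCode())
		return nil
	}
	if err != nil {
		return err // binary missing, permission denied, etc.
	}
	return fmt.Errorf("expected command to fail, but it succeeded: %s", out)
}

func main() {
	// Invocation mirroring the failing assertion above.
	err := expectFailure("out/minikube-linux-arm64", "status",
		"-p", "pause-20210708233938-257783", "--output=json", "--layout=cluster")
	if err != nil {
		fmt.Println(err)
	}
}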
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210708233938-257783
helpers_test.go:236: (dbg) docker inspect pause-20210708233938-257783:

-- stdout --
	[
	    {
	        "Id": "9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7",
	        "Created": "2021-07-08T23:42:55.939971333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 374514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-07-08T23:42:56.671510562Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/hosts",
	        "LogPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7-json.log",
	        "Name": "/pause-20210708233938-257783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210708233938-257783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210708233938-257783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa317
1af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/d
ocker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41
f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210708233938-257783",
	                "Source": "/var/lib/docker/volumes/pause-20210708233938-257783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210708233938-257783",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210708233938-257783",
	                "name.minikube.sigs.k8s.io": "pause-20210708233938-257783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3364fc967f3a3a4f088daf2fc73d5bc45f12bb4867ba695dabf0ca91254c0104",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49617"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49616"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49613"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49615"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49614"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3364fc967f3a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210708233938-257783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9e0e986f196e",
	                        "pause-20210708233938-257783"
	                    ],
	                    "NetworkID": "7afb1bbd4669bf981affda6e21a0542828c16cc07887274e53996cdbb87c5e05",
	                    "EndpointID": "cf78b06b889a67153f813b6dd94cd8e9e0adb49ff2586b7f7058289d1b323f20",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
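Note: in the inspect output above, `State.Paused` is false even though the profile had just been paused, which is the inconsistency this group of pause tests keeps tripping over. When only one field matters, docker's `--format` template is easier to work with than the full JSON dump; an illustrative Go wrapper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerPaused reads a single field from `docker inspect` using a
// Go template instead of decoding the whole JSON document.
func containerPaused(name string) (bool, error) {
	out, err := exec.Command("docker", "inspect",
		"--format", "{{.State.Paused}}", name).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	paused, err := containerPaused("pause-20210708233938-257783")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("paused:", paused)
}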
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210708233938-257783 -n pause-20210708233938-257783
helpers_test.go:245: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p pause-20210708233938-257783 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p pause-20210708233938-257783 logs -n 25: (1.43760804s)
helpers_test.go:253: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                    Args                    |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                         | multinode-20210708232645-257783            | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:36:18 UTC | Thu, 08 Jul 2021 23:36:23 UTC |
	|         | multinode-20210708232645-257783            |                                            |         |         |                               |                               |
	| start   | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:38:07 UTC | Thu, 08 Jul 2021 23:38:52 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --memory=2048 --driver=docker              |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:38:52 UTC | Thu, 08 Jul 2021 23:38:52 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --cancel-scheduled                         |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:05 UTC | Thu, 08 Jul 2021 23:39:12 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --schedule 5s                              |                                            |         |         |                               |                               |
	| delete  | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:12 UTC | Thu, 08 Jul 2021 23:39:17 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	| delete  | -p                                         | insufficient-storage-20210708233917-257783 | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:32 UTC | Thu, 08 Jul 2021 23:39:38 UTC |
	|         | insufficient-storage-20210708233917-257783 |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubenet-20210708233938-257783              | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:39:38 UTC |
	|         | kubenet-20210708233938-257783              |                                            |         |         |                               |                               |
	| delete  | -p                                         | flannel-20210708233938-257783              | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:39:39 UTC |
	|         | flannel-20210708233938-257783              |                                            |         |         |                               |                               |
	| delete  | -p false-20210708233939-257783             | false-20210708233939-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:39 UTC | Thu, 08 Jul 2021 23:39:40 UTC |
	| start   | -p                                         | force-systemd-env-20210708233940-257783    | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:40 UTC | Thu, 08 Jul 2021 23:40:42 UTC |
	|         | force-systemd-env-20210708233940-257783    |                                            |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=5 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-env-20210708233940-257783    | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:40:42 UTC | Thu, 08 Jul 2021 23:40:44 UTC |
	|         | force-systemd-env-20210708233940-257783    |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-flag-20210708234044-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:40:44 UTC | Thu, 08 Jul 2021 23:41:30 UTC |
	|         | force-systemd-flag-20210708234044-257783   |                                            |         |         |                               |                               |
	|         | --memory=2048 --force-systemd              |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-flag-20210708234044-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:41:30 UTC | Thu, 08 Jul 2021 23:41:33 UTC |
	|         | force-systemd-flag-20210708234044-257783   |                                            |         |         |                               |                               |
	| start   | -p                                         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:41:33 UTC | Thu, 08 Jul 2021 23:42:18 UTC |
	|         | cert-options-20210708234133-257783         |                                            |         |         |                               |                               |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                  |                                            |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15              |                                            |         |         |                               |                               |
	|         | --apiserver-names=localhost                |                                            |         |         |                               |                               |
	|         | --apiserver-names=www.google.com           |                                            |         |         |                               |                               |
	|         | --apiserver-port=8555                      |                                            |         |         |                               |                               |
	|         | --driver=docker                            |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| -p      | cert-options-20210708234133-257783         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:19 UTC | Thu, 08 Jul 2021 23:42:19 UTC |
	|         | ssh openssl x509 -text -noout -in          |                                            |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt      |                                            |         |         |                               |                               |
	| delete  | -p                                         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:19 UTC | Thu, 08 Jul 2021 23:42:22 UTC |
	|         | cert-options-20210708234133-257783         |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:22 UTC | Thu, 08 Jul 2021 23:43:17 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0               |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| stop    | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:43:17 UTC | Thu, 08 Jul 2021 23:43:20 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:43:20 UTC | Thu, 08 Jul 2021 23:44:07 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-beta.0        |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:07 UTC | Thu, 08 Jul 2021 23:44:29 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-beta.0        |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:29 UTC | Thu, 08 Jul 2021 23:44:32 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	| start   | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:44:47 UTC |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --install-addons=false                     |                                            |         |         |                               |                               |
	|         | --wait=all --driver=docker                 |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:47 UTC | Thu, 08 Jul 2021 23:44:52 UTC |
	|         | --alsologtostderr                          |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| -p      | pause-20210708233938-257783                | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:55 UTC | Thu, 08 Jul 2021 23:44:56 UTC |
	|         | logs -n 25                                 |                                            |         |         |                               |                               |
	| -p      | pause-20210708233938-257783                | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:57 UTC | Thu, 08 Jul 2021 23:44:58 UTC |
	|         | logs -n 25                                 |                                            |         |         |                               |                               |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/07/08 23:44:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 23:44:47.154451  383102 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:44:47.154571  383102 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:47.154583  383102 out.go:299] Setting ErrFile to fd 2...
	I0708 23:44:47.154587  383102 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:47.154704  383102 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:44:47.154960  383102 out.go:293] Setting JSON to false
	I0708 23:44:47.156021  383102 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8836,"bootTime":1625779051,"procs":490,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:44:47.156093  383102 start.go:121] virtualization:  
	I0708 23:44:47.158605  383102 out.go:165] * [pause-20210708233938-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0708 23:44:47.160748  383102 out.go:165]   - MINIKUBE_LOCATION=11942
	I0708 23:44:47.162569  383102 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:44:47.164384  383102 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0708 23:44:47.166094  383102 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0708 23:44:47.166892  383102 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:44:47.221034  383102 docker.go:132] docker version: linux-20.10.7
	I0708 23:44:47.221102  383102 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:44:47.306208  383102 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:49 SystemTime:2021-07-08 23:44:47.254744355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLi
cense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
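	The JSON dump above is what minikube parses during its driver checks; a hedged manual spot-check of the same fields (assuming the docker CLI on this host) could look like:
	  # hypothetical: print just the storage driver, cgroup driver, and memory minikube reads above
	  docker system info --format '{{.Driver}} {{.CgroupDriver}} {{.MemTotal}}'
	  # e.g. overlay2 cgroupfs 8227766272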
	I0708 23:44:47.306303  383102 docker.go:244] overlay module found
	I0708 23:44:47.309309  383102 out.go:165] * Using the docker driver based on existing profile
	I0708 23:44:47.309327  383102 start.go:278] selected driver: docker
	I0708 23:44:47.309332  383102 start.go:751] validating driver "docker" against &{Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:47.309419  383102 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0708 23:44:47.309784  383102 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:44:47.393590  383102 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:49 SystemTime:2021-07-08 23:44:47.342522281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLi
cense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:44:47.393925  383102 cni.go:93] Creating CNI manager for ""
	I0708 23:44:47.393941  383102 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:44:47.393950  383102 start_flags.go:275] config:
	{Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Netw
orkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:47.396046  383102 out.go:165] * Starting control plane node pause-20210708233938-257783 in cluster pause-20210708233938-257783
	I0708 23:44:47.396084  383102 cache.go:117] Beginning downloading kic base image for docker with crio
	I0708 23:44:47.398019  383102 out.go:165] * Pulling base image ...
	I0708 23:44:47.398037  383102 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:44:47.398068  383102 preload.go:150] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4
	I0708 23:44:47.398080  383102 cache.go:56] Caching tarball of preloaded images
	I0708 23:44:47.398205  383102 preload.go:174] Found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0708 23:44:47.398227  383102 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.2 on crio
	I0708 23:44:47.398319  383102 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/config.json ...
	I0708 23:44:47.398483  383102 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0708 23:44:47.436290  383102 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0708 23:44:47.436316  383102 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0708 23:44:47.436330  383102 cache.go:205] Successfully downloaded all kic artifacts
	I0708 23:44:47.436359  383102 start.go:313] acquiring machines lock for pause-20210708233938-257783: {Name:mk0dd574f5aab82d7e948dc25f56eae9437435ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 23:44:47.436434  383102 start.go:317] acquired machines lock for "pause-20210708233938-257783" in 54.777µs
	I0708 23:44:47.436455  383102 start.go:93] Skipping create...Using existing machine configuration
	I0708 23:44:47.436464  383102 fix.go:55] fixHost starting: 
	I0708 23:44:47.436724  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:47.471771  383102 fix.go:108] recreateIfNeeded on pause-20210708233938-257783: state=Running err=<nil>
	W0708 23:44:47.471801  383102 fix.go:134] unexpected machine state, will restart: <nil>
	I0708 23:44:47.474143  383102 out.go:165] * Updating the running docker "pause-20210708233938-257783" container ...
	I0708 23:44:47.474165  383102 machine.go:88] provisioning docker machine ...
	I0708 23:44:47.474179  383102 ubuntu.go:169] provisioning hostname "pause-20210708233938-257783"
	I0708 23:44:47.474233  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:47.518727  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:47.518901  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:47.518913  383102 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210708233938-257783 && echo "pause-20210708233938-257783" | sudo tee /etc/hostname
	I0708 23:44:47.662054  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210708233938-257783
	
	I0708 23:44:47.662122  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:47.698564  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:47.698719  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:47.698745  383102 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210708233938-257783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210708233938-257783/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210708233938-257783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 23:44:47.806503  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: 
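	The script above reconciles /etc/hosts with the new machine name. Purely as a hedged illustration of its sed branch, run against a scratch file with a hypothetical hostname demo-node (GNU sed assumed):
	  # hypothetical dry run of the same substitution, no sudo/tee needed
	  printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > /tmp/hosts
	  sed -i 's/^127.0.1.1\s.*/127.0.1.1 demo-node/g' /tmp/hosts
	  cat /tmp/hosts    # second line is now: 127.0.1.1 demo-node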
	I0708 23:44:47.806520  383102 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube}
	I0708 23:44:47.806546  383102 ubuntu.go:177] setting up certificates
	I0708 23:44:47.806556  383102 provision.go:83] configureAuth start
	I0708 23:44:47.806605  383102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210708233938-257783
	I0708 23:44:47.841582  383102 provision.go:137] copyHostCerts
	I0708 23:44:47.841630  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem, removing ...
	I0708 23:44:47.841642  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem
	I0708 23:44:47.841700  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem (1078 bytes)
	I0708 23:44:47.841780  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem, removing ...
	I0708 23:44:47.841793  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem
	I0708 23:44:47.841816  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem (1123 bytes)
	I0708 23:44:47.841862  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem, removing ...
	I0708 23:44:47.841871  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem
	I0708 23:44:47.841892  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem (1679 bytes)
	I0708 23:44:47.841933  383102 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem org=jenkins.pause-20210708233938-257783 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210708233938-257783]
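	The san=[...] list logged above is baked into server.pem as subject alternative names; assuming OpenSSL 1.1.1+ on the host, a hedged way to confirm them after generation:
	  # hypothetical: print the subjectAltName extension of the generated cert
	  # ($MINIKUBE_HOME as exported in the environment listing near the top of this start)
	  openssl x509 -noout -ext subjectAltName -in "$MINIKUBE_HOME/machines/server.pem"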
	I0708 23:44:48.952877  383102 provision.go:171] copyRemoteCerts
	I0708 23:44:48.952938  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 23:44:48.952979  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:48.988956  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.069409  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 23:44:49.084030  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0708 23:44:49.098201  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 23:44:49.112707  383102 provision.go:86] duration metric: configureAuth took 1.306144285s
	I0708 23:44:49.112722  383102 ubuntu.go:193] setting minikube options for container-runtime
	I0708 23:44:49.112945  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.147842  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:49.148030  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:49.148050  383102 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	I0708 23:44:49.265435  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 23:44:49.265449  383102 machine.go:91] provisioned docker machine in 1.791277399s
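	The tee above leaves the insecure-registry flag in /etc/sysconfig/crio.minikube inside the node container; a hedged verification from the host, using the container name from this run:
	  # hypothetical check from the host
	  docker exec pause-20210708233938-257783 cat /etc/sysconfig/crio.minikube
	  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '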
	I0708 23:44:49.265466  383102 start.go:267] post-start starting for "pause-20210708233938-257783" (driver="docker")
	I0708 23:44:49.265473  383102 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 23:44:49.265521  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 23:44:49.265564  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.302440  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.385342  383102 ssh_runner.go:149] Run: cat /etc/os-release
	I0708 23:44:49.387501  383102 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0708 23:44:49.387521  383102 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0708 23:44:49.387533  383102 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0708 23:44:49.387542  383102 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0708 23:44:49.387552  383102 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/addons for local assets ...
	I0708 23:44:49.387592  383102 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/files for local assets ...
	I0708 23:44:49.387720  383102 start.go:270] post-start completed in 122.24664ms
	I0708 23:44:49.387753  383102 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 23:44:49.387787  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.422565  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.503288  383102 fix.go:57] fixHost completed within 2.066821667s
	I0708 23:44:49.503310  383102 start.go:80] releasing machines lock for "pause-20210708233938-257783", held for 2.066864546s
	I0708 23:44:49.503369  383102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210708233938-257783
	I0708 23:44:49.537513  383102 ssh_runner.go:149] Run: systemctl --version
	I0708 23:44:49.537553  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.537599  383102 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0708 23:44:49.537656  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.578213  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.591758  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.667104  383102 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0708 23:44:49.802373  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0708 23:44:49.809858  383102 docker.go:153] disabling docker service ...
	I0708 23:44:49.809898  383102 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0708 23:44:49.818109  383102 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0708 23:44:49.826668  383102 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0708 23:44:49.957409  383102 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0708 23:44:50.082177  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0708 23:44:50.090087  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 23:44:50.100877  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0708 23:44:50.109868  383102 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0708 23:44:50.109919  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
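	The two sed edits above pin the pause image and the default CNI network in /etc/crio/crio.conf; a hedged check that both keys took effect on the node:
	  # hypothetical: show the two keys rewritten above
	  grep -E '^(pause_image|cni_default_network) = ' /etc/crio/crio.conf
	  # pause_image = "k8s.gcr.io/pause:3.4.1"
	  # cni_default_network = "kindnet"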
	I0708 23:44:50.116503  383102 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 23:44:50.121833  383102 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 23:44:50.126949  383102 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0708 23:44:50.251265  383102 ssh_runner.go:149] Run: sudo systemctl start crio
	I0708 23:44:50.259385  383102 start.go:386] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 23:44:50.259425  383102 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0708 23:44:50.261926  383102 start.go:411] Will wait 60s for crictl version
	I0708 23:44:50.261961  383102 ssh_runner.go:149] Run: sudo crictl version
	I0708 23:44:50.286962  383102 start.go:420] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0708 23:44:50.287041  383102 ssh_runner.go:149] Run: crio --version
	I0708 23:44:50.352750  383102 ssh_runner.go:149] Run: crio --version
	I0708 23:44:50.423233  383102 out.go:165] * Preparing Kubernetes v1.21.2 on CRI-O 1.20.3 ...
	I0708 23:44:50.423307  383102 cli_runner.go:115] Run: docker network inspect pause-20210708233938-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0708 23:44:50.464228  383102 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
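	The grep above confirms the node maps host.minikube.internal to the network gateway (192.168.58.1 per the network inspect). Were the entry missing, a hedged equivalent of appending it by hand would be:
	  # hypothetical: add the tab-separated gateway mapping the grep above looks for
	  printf '192.168.58.1\thost.minikube.internal\n' | sudo tee -a /etc/hosts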
	I0708 23:44:50.467264  383102 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:44:50.467314  383102 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:44:50.490940  383102 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:44:50.490957  383102 crio.go:333] Images already preloaded, skipping extraction
	I0708 23:44:50.490993  383102 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:44:50.512176  383102 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:44:50.512192  383102 cache_images.go:74] Images are preloaded, skipping loading
	I0708 23:44:50.512245  383102 ssh_runner.go:149] Run: crio config
	I0708 23:44:50.587658  383102 cni.go:93] Creating CNI manager for ""
	I0708 23:44:50.587677  383102 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:44:50.587685  383102 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0708 23:44:50.587790  383102 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210708233938-257783 NodeName:pause-20210708233938-257783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/mi
nikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0708 23:44:50.587905  383102 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210708233938-257783"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
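	This three-document config is what the scp below writes to /var/tmp/minikube/kubeadm.yaml.new. Assuming kubeadm v1.21.2 is on the node, a hedged way to exercise only the preflight checks against such a file, without touching the running cluster state:
	  # hypothetical: run only kubeadm's preflight phase against a config like the one above
	  sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml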
	
	I0708 23:44:50.587994  383102 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-20210708233938-257783 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
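	The drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp a few lines below; after placing such a file by hand, the hedged manual follow-up would be:
	  # hypothetical: make systemd pick up the new drop-in and restart the kubelet
	  sudo systemctl daemon-reload
	  sudo systemctl restart kubelet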
	I0708 23:44:50.588044  383102 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0708 23:44:50.593749  383102 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 23:44:50.593819  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 23:44:50.599162  383102 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (558 bytes)
	I0708 23:44:50.609681  383102 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 23:44:50.620170  383102 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1884 bytes)
	I0708 23:44:50.630479  383102 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0708 23:44:50.632974  383102 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783 for IP: 192.168.58.2
	I0708 23:44:50.633021  383102 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key
	I0708 23:44:50.633039  383102 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key
	I0708 23:44:50.633098  383102 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.key
	I0708 23:44:50.633117  383102 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.key.cee25041
	I0708 23:44:50.633142  383102 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.key
	I0708 23:44:50.633227  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem (1338 bytes)
	W0708 23:44:50.633268  383102 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783_empty.pem, impossibly tiny 0 bytes
	I0708 23:44:50.633280  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem (1675 bytes)
	I0708 23:44:50.633305  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem (1078 bytes)
	I0708 23:44:50.633332  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem (1123 bytes)
	I0708 23:44:50.633356  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem (1679 bytes)
	I0708 23:44:50.634343  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0708 23:44:50.648438  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 23:44:50.662480  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 23:44:50.677256  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 23:44:50.691568  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 23:44:50.705113  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0708 23:44:50.718728  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 23:44:50.733001  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 23:44:50.748832  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 23:44:50.762662  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem --> /usr/share/ca-certificates/257783.pem (1338 bytes)
	I0708 23:44:50.776552  383102 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 23:44:50.786598  383102 ssh_runner.go:149] Run: openssl version
	I0708 23:44:50.790834  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 23:44:50.796632  383102 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.799083  383102 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jul  8 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.799118  383102 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.803062  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 23:44:50.808543  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257783.pem && ln -fs /usr/share/ca-certificates/257783.pem /etc/ssl/certs/257783.pem"
	I0708 23:44:50.814370  383102 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.816803  383102 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jul  8 23:18 /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.816856  383102 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.820832  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257783.pem /etc/ssl/certs/51391683.0"
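	The link names b5213941.0 and 51391683.0 above follow OpenSSL's c_rehash convention: the certificate's subject hash (what the `openssl x509 -hash` runs print) plus a .0 suffix. A hedged sketch recomputing one of them:
	  # hypothetical: recompute the symlink name used for minikubeCA.pem above
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  echo "/etc/ssl/certs/${h}.0"    # expected: /etc/ssl/certs/b5213941.0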
	I0708 23:44:50.826095  383102 kubeadm.go:390] StartCluster: {Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:50.826162  383102 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 23:44:50.826221  383102 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 23:44:50.849897  383102 cri.go:76] found id: "b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7"
	I0708 23:44:50.849919  383102 cri.go:76] found id: "7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a"
	I0708 23:44:50.849943  383102 cri.go:76] found id: "aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e"
	I0708 23:44:50.849950  383102 cri.go:76] found id: "0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef"
	I0708 23:44:50.849954  383102 cri.go:76] found id: "66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e"
	I0708 23:44:50.849963  383102 cri.go:76] found id: "76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41"
	I0708 23:44:50.849967  383102 cri.go:76] found id: "f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c"
	I0708 23:44:50.849975  383102 cri.go:76] found id: ""
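	The IDs above come from the label-filtered crictl ps two entries up; a hedged, human-readable view of the same containers on the node:
	  # hypothetical: list the kube-system containers with names and state
	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	  # hypothetical: inspect the first ID found above (JSON output, truncated)
	  sudo crictl inspect b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 | head -n 20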
	I0708 23:44:50.850009  383102 ssh_runner.go:149] Run: sudo runc list -f json
	I0708 23:44:50.888444  383102 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef","pid":1704,"status":"running","bundle":"/run/containers/storage/overlay-containers/0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef/userdata","rootfs":"/var/lib/containers/storage/overlay/cdce3ed6af07ab111ab2fb108c2309db54d9634ce1811e68896b699446ff3e45/merged","created":"2021-07-08T23:43:28.796258779Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1c9d3bb9","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1c9d3bb9\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.containe
r.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.409347635Z","io.kubernetes.cri-o.Image":"9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.2","io.kubernetes.cri-o.ImageRef":"9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c0a79d1d801cddeaa32444663181957f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210708233938-257783_c0a79d1d801cddeaa32444663181957f/kube-controller-mana
ger/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cdce3ed6af07ab111ab2fb108c2309db54d9634ce1811e68896b699446ff3e45/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_pa
th\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c0a79d1d801cddeaa32444663181957f/containers/kube-controller-manager/b3e49874\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c0a79d1d801cddeaa32444663181957f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exe
c\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.hash":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.seen":"2021-07-08T23:43:23.463755710Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","pid":1454,"status":"running","bundle":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata","rootfs":"/var/lib/containers/storage/overlay/995db9ddd5bff03a3e4252f22825a88a5095babda303cc304d0d9f42db6e7025/merged","created":"2021-07-08T23:43:27
.761173536Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.58.2:8443\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463729331Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"48a917795140826e0af6da63b039926b\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.500118923Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9
e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"48a917795140826e0af6da63b039926b\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210708233938-257783\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210708233938-257783_48a917795140826e0af6da63b039926b/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210708233938-257783\",\"uid\":\"48a917795140826e0af6da63b039926b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/995db9ddd5bff03a3e4252f22825a88a5095babda303cc304d0d9f42db6e7025/merged","io.kubernete
s.cri-o.Name":"k8s_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"48a917795140826e0af6da63b039926b","kubeadm.kubernetes.io/kub
e-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48a917795140826e0af6da63b039926b","kubernetes.io/config.seen":"2021-07-08T23:43:23.463729331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e","pid":1696,"status":"running","bundle":"/run/containers/storage/overlay-containers/66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e/userdata","rootfs":"/var/lib/containers/storage/overlay/c744810c8a09ccc54eaf6b538b13405ff75025ea0fcdf7c4f79b45507c315ea4/merged","created":"2021-07-08T23:43:28.74704463Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a5e28f4f","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationM
essagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a5e28f4f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.458654206Z","io.kubernetes.cri-o.Image":"ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.2","io.kubernetes.cri-o.ImageRef":"ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io
.kubernetes.pod.uid\":\"636f853856e082c029b85fb89a036300\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210708233938-257783_636f853856e082c029b85fb89a036300/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c744810c8a09ccc54eaf6b538b13405ff75025ea0fcdf7c4f79b45507c315ea4/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin"
:"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/636f853856e082c029b85fb89a036300/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/636f853856e082c029b85fb89a036300/containers/kube-scheduler/6f04df63\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"636f853856e082c029b85fb89a036300","kubernetes.io/config.hash":"636f853856e082c029b85fb89a036300","kubernetes.io/config.seen":"2021-07-08T23:43:23.463757039Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStop
USec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41","pid":1601,"status":"running","bundle":"/run/containers/storage/overlay-containers/76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41/userdata","rootfs":"/var/lib/containers/storage/overlay/01ae8557075556025556e28b3617bfe934a965557cd8fd4d435456c30b0c4d27/merged","created":"2021-07-08T23:43:28.33827029Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"364fba0d","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"364fba0d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",
\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.136323454Z","io.kubernetes.cri-o.Image":"05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2349193ca86d9558bc895849265d2bbd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210708233938-257783_2349193ca86d9558bc895849265d2bbd/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overla
y/01ae8557075556025556e28b3617bfe934a965557cd8fd4d435456c30b0c4d27/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2349193ca86d9558bc895849265d2bbd/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2349193ca86d9558bc895849265d2bbd/containe
rs/etcd/486736f1\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2349193ca86d9558bc895849265d2bbd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2349193ca86d9558bc895849265d2bbd","kubernetes.io/config.seen":"2021-07-08T23:43:23.463758229Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","pid":2476,"status":"running","bundle":"/run/containers/storage/overlay-co
ntainers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata","rootfs":"/var/lib/containers/storage/overlay/8e2d756f3e3d21bd67765fd6dc79466722b3777db9c24dd3c63a849026ee706e/merged","created":"2021-07-08T23:43:58.920322142Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:58.220124726Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:58.842364249Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":
"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kindnet-589hd","io.kubernetes.cri-o.Labels":"{\"app\":\"kindnet\",\"pod-template-generation\":\"1\",\"controller-revision-hash\":\"694b6fb659\",\"io.kubernetes.pod.uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kindnet-589hd\",\"tier\":\"node\",\"k8s-app\":\"kindnet\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-589hd_55f424f0-d7a4-418f-8572-27041384f3ba/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-589hd\",\"uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8e2d756f3e3d21bd67765fd6dc7946672
2b3777db9c24dd3c63a849026ee706e/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/shm","io.kubernetes.pod.name":"kindnet-589hd","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"55f424f0-d7a4-418f-8572-27041384f3ba","k8s-app":"kindnet","ku
bernetes.io/config.seen":"2021-07-08T23:43:58.220124726Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a","pid":2554,"status":"running","bundle":"/run/containers/storage/overlay-containers/7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a/userdata","rootfs":"/var/lib/containers/storage/overlay/462c6688ce1d023d5df1b74afd144759f5b176d71761f6bc62065141ab582bf5/merged","created":"2021-07-08T23:43:59.140412019Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"73cb1b1","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"73cb1b1\",\"io.kub
ernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:59.029318377Z","io.kubernetes.cri-o.Image":"d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.2","io.kubernetes.cri-o.ImageRef":"d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-rb2ws\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-rb2ws_06346e
2c-5d4d-4e26-9d87-bfe3d4715985/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/462c6688ce1d023d5df1b74afd144759f5b176d71761f6bc62065141ab582bf5/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"con
tainer_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/containers/kube-proxy/343cc99a\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/volumes/kubernetes.io~projected/kube-api-access-2vk7z\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-rb2ws","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"06346e2c-5d4d-4e26-9d87-bfe3d4715985","kubernetes.io/config.se
en":"2021-07-08T23:43:58.246007990Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","pid":1533,"status":"running","bundle":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata","rootfs":"/var/lib/containers/storage/overlay/72dc08e222ab55d43eaea1871cbcf2481a5b6ed4398bc531f5b83c9c2bf82abc/merged","created":"2021-07-08T23:43:27.9820761Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"2349193ca86d9558bc895849265d2bbd\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.58.2:2379\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463758229Z\",\"kubernetes.io/config.source\":\"file\"}","io.kub
ernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.685254972Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"etcd-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"2349193ca86d9558bc895849265d2bbd\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210708233938-257783\",\"io.kubernetes.c
ontainer.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210708233938-257783_2349193ca86d9558bc895849265d2bbd/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210708233938-257783\",\"uid\":\"2349193ca86d9558bc895849265d2bbd\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72dc08e222ab55d43eaea1871cbcf2481a5b6ed4398bc531f5b83c9c2bf82abc/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeH
andler":"","io.kubernetes.cri-o.SandboxID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2349193ca86d9558bc895849265d2bbd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2349193ca86d9558bc895849265d2bbd","kubernetes.io/config.seen":"2021-07-08T23:43:23.463758229Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","pid":1526,"status":"running","bundle":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd0
02fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata","rootfs":"/var/lib/containers/storage/overlay/f9500faec4678f8eedd7e562c4634c3983eea5a8367363ee2114993ba2617eb9/merged","created":"2021-07-08T23:43:28.02245442Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"636f853856e082c029b85fb89a036300\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463757039Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.680724389Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true"
,"io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"636f853856e082c029b85fb89a036300\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210708233938-257783\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210708233938-257783_636f853856e082c029b85fb89a036300/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210708233938-257783\",\"uid\":\"636f853856e082c029b85fb89a036300\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/
containers/storage/overlay/f9500faec4678f8eedd7e562c4634c3983eea5a8367363ee2114993ba2617eb9/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.p
od.namespace":"kube-system","io.kubernetes.pod.uid":"636f853856e082c029b85fb89a036300","kubernetes.io/config.hash":"636f853856e082c029b85fb89a036300","kubernetes.io/config.seen":"2021-07-08T23:43:23.463757039Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e","pid":2536,"status":"running","bundle":"/run/containers/storage/overlay-containers/aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e/userdata","rootfs":"/var/lib/containers/storage/overlay/196f295295a6ebd45ec80ca7af0769b45f724efb7c52e5a54faf0894d74b8486/merged","created":"2021-07-08T23:43:59.094903496Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"42880ebe","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernet
es.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"42880ebe\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:59.000934542Z","io.kubernetes.cri-o.Image":"f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-589hd\",\"io.kubernetes.pod.namespace\":\"kube-system\"
,\"io.kubernetes.pod.uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-589hd_55f424f0-d7a4-418f-8572-27041384f3ba/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/196f295295a6ebd45ec80ca7af0769b45f724efb7c52e5a54faf0894d74b8486/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":
"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/containers/kindnet-cni/63efdea9\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/volumes/kubernetes.io~projected/kube-api-access-vxfqs\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-589hd","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"55f424f0-d7
a4-418f-8572-27041384f3ba","kubernetes.io/config.seen":"2021-07-08T23:43:58.220124726Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7","pid":3117,"status":"running","bundle":"/run/containers/storage/overlay-containers/b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7/userdata","rootfs":"/var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged","created":"2021-07-08T23:44:44.929527419Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3ba99b8a","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"T
CP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3ba99b8a\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:44:44.869981068Z","io.kubernetes.cr
i-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-mnwpk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-mnwpk_cd8ce294-9dba-4d2e-8793-cc0862414323/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.ResolvPath":"/run/co
ntainers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/containers/coredns/ebcb451b\",\"readonly\":false},{\"contai
ner_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/volumes/kubernetes.io~projected/kube-api-access-wjk4b\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-mnwpk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cd8ce294-9dba-4d2e-8793-cc0862414323","kubernetes.io/config.seen":"2021-07-08T23:44:44.378304571Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","pid":3088,"status":"running","bundle":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata","rootfs":"/var/lib/containers/storage/overlay/7c390972a7ebbdd53365500f2760439b1c797f16f323006acdd93709af97278c/merged",
"created":"2021-07-08T23:44:44.819414214Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-07-08T23:44:44.378304571Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"vethad721594\",\"mac\":\"fa:ff:ad:c6:25:66\"},{\"name\":\"eth0\",\"mac\":\"22:75:6a:ff:8f:5c\",\"sandbox\":\"/var/run/netns/ec4d5ca9-9e24-41d3-8013-97d3a7a811bd\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T2
3:44:44.69371705Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-mnwpk","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-mnwpk","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-mnwpk\",\"pod-template-hash\":\"558bd4d5db\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-mnwpk_cd8ce294-9dba-4d2e-8793-cc0862414323/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-mnwpk\",\"uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\",\"namespace\":\
"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7c390972a7ebbdd53365500f2760439b1c797f16f323006acdd93709af97278c/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-mnwpk","
io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cd8ce294-9dba-4d2e-8793-cc0862414323","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-07-08T23:44:44.378304571Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","pid":1483,"status":"running","bundle":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata","rootfs":"/var/lib/containers/storage/overlay/efa90599834890f8bd3da27a3a749f188e95537431bf95c2fbbda75a1a376820/merged","created":"2021-07-08T23:43:27.88584733Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463755710Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes
.io/config.hash\":\"c0a79d1d801cddeaa32444663181957f\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.588501479Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"c0a79d1d801cddeaa32444663181957f\",\"io.kubernetes.container.name\":\"
POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210708233938-257783\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210708233938-257783_c0a79d1d801cddeaa32444663181957f/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210708233938-257783\",\"uid\":\"c0a79d1d801cddeaa32444663181957f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efa90599834890f8bd3da27a3a749f188e95537431bf95c2fbbda75a1a376820/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"tr
ue","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.hash":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.seen":"2021-07-08T23:43:23.463755710Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"edb6f1460db485be501f94018d5caf7a
576fdd2e67b51c15322cf821191a0ebb","pid":2500,"status":"running","bundle":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata","rootfs":"/var/lib/containers/storage/overlay/621192df224b4f253243649a866cb69454571da103b6d3f3b1234d53c88440fd/merged","created":"2021-07-08T23:43:58.96207639Z","annotations":{"controller-revision-hash":"6896ccdc5","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:58.246007990Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:58.878275037Z","io.kubernetes.cri-o.HostName":"pause-202107
08233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-proxy-rb2ws","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-rb2ws\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"6896ccdc5\",\"pod-template-generation\":\"1\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-rb2ws_06346e2c-5d4d-4e26-9d87-bfe3d4715985/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-rb2ws\",\"uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"
/var/lib/containers/storage/overlay/621192df224b4f253243649a866cb69454571da103b6d3f3b1234d53c88440fd/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/shm","io.kubernetes.pod.name":"kube-proxy-rb2ws","io.kubernetes.pod.namespace":"kube-system","io.kuberne
tes.pod.uid":"06346e2c-5d4d-4e26-9d87-bfe3d4715985","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-07-08T23:43:58.246007990Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c","pid":1608,"status":"running","bundle":"/run/containers/storage/overlay-containers/f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c/userdata","rootfs":"/var/lib/containers/storage/overlay/acf62325b91e31207f04e3f39616a0820b0809fa7e55c2b2ce5eaf30b7367ddc/merged","created":"2021-07-08T23:43:28.292409803Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"44b38584","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o
.Annotations":"{\"io.kubernetes.container.hash\":\"44b38584\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.165591981Z","io.kubernetes.cri-o.Image":"2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.2","io.kubernetes.cri-o.ImageRef":"2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"48a917795140826e0
af6da63b039926b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210708233938-257783_48a917795140826e0af6da63b039926b/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/acf62325b91e31207f04e3f39616a0820b0809fa7e55c2b2ce5eaf30b7367ddc/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":
"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/48a917795140826e0af6da63b039926b/containers/kube-apiserver/141310e0\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/48a917795140826e0af6da63b039926b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.pod.nam
espace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"48a917795140826e0af6da63b039926b","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48a917795140826e0af6da63b039926b","kubernetes.io/config.seen":"2021-07-08T23:43:23.463729331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0708 23:44:50.889436  383102 cri.go:113] list returned 14 containers
	I0708 23:44:50.889463  383102 cri.go:116] container: {ID:0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef Status:running}
	I0708 23:44:50.889494  383102 cri.go:122] skipping {0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef running}: state = "running", want "paused"
	I0708 23:44:50.889513  383102 cri.go:116] container: {ID:153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4 Status:running}
	I0708 23:44:50.889538  383102 cri.go:118] skipping 153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4 - not in ps
	I0708 23:44:50.889556  383102 cri.go:116] container: {ID:66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e Status:running}
	I0708 23:44:50.889571  383102 cri.go:122] skipping {66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e running}: state = "running", want "paused"
	I0708 23:44:50.889587  383102 cri.go:116] container: {ID:76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41 Status:running}
	I0708 23:44:50.889601  383102 cri.go:122] skipping {76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41 running}: state = "running", want "paused"
	I0708 23:44:50.889626  383102 cri.go:116] container: {ID:79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2 Status:running}
	I0708 23:44:50.889644  383102 cri.go:118] skipping 79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2 - not in ps
	I0708 23:44:50.889657  383102 cri.go:116] container: {ID:7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a Status:running}
	I0708 23:44:50.889671  383102 cri.go:122] skipping {7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a running}: state = "running", want "paused"
	I0708 23:44:50.889687  383102 cri.go:116] container: {ID:7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3 Status:running}
	I0708 23:44:50.889711  383102 cri.go:118] skipping 7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3 - not in ps
	I0708 23:44:50.889726  383102 cri.go:116] container: {ID:98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957 Status:running}
	I0708 23:44:50.889739  383102 cri.go:118] skipping 98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957 - not in ps
	I0708 23:44:50.889751  383102 cri.go:116] container: {ID:aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e Status:running}
	I0708 23:44:50.889763  383102 cri.go:122] skipping {aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e running}: state = "running", want "paused"
	I0708 23:44:50.889786  383102 cri.go:116] container: {ID:b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 Status:running}
	I0708 23:44:50.889802  383102 cri.go:122] skipping {b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 running}: state = "running", want "paused"
	I0708 23:44:50.889816  383102 cri.go:116] container: {ID:ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f Status:running}
	I0708 23:44:50.889831  383102 cri.go:118] skipping ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f - not in ps
	I0708 23:44:50.889844  383102 cri.go:116] container: {ID:ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe Status:running}
	I0708 23:44:50.889868  383102 cri.go:118] skipping ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe - not in ps
	I0708 23:44:50.889884  383102 cri.go:116] container: {ID:edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb Status:running}
	I0708 23:44:50.889899  383102 cri.go:118] skipping edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb - not in ps
	I0708 23:44:50.889910  383102 cri.go:116] container: {ID:f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c Status:running}
	I0708 23:44:50.889924  383102 cri.go:122] skipping {f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c running}: state = "running", want "paused"
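The cri.go trace above is a single filter pass over the 14 listed containers: IDs that were not in the caller's ps set are dropped first (the cri.go:118 lines, all sandboxes), then anything whose state is not the wanted "paused" (the cri.go:122 lines). Since everything is running and nothing is paused, the pass keeps zero containers. A hedged reconstruction of that loop; the type and function names are invented for illustration:

package main

import "fmt"

type container struct {
	ID     string
	Status string
}

// selectContainers mirrors the two skip branches in the trace: not-in-ps
// first, then a state mismatch against the wanted state ("paused" here).
func selectContainers(all []container, inPs map[string]bool, want string) []container {
	var keep []container
	for _, c := range all {
		if !inPs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
			continue
		}
		keep = append(keep, c)
	}
	return keep
}

func main() {
	all := []container{{ID: "0cb308b9b448", Status: "running"}}
	fmt.Println(selectContainers(all, map[string]bool{"0cb308b9b448": true}, "paused"))
}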
	I0708 23:44:50.889976  383102 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 23:44:50.896457  383102 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0708 23:44:50.896471  383102 kubeadm.go:600] restartCluster start
	I0708 23:44:50.896504  383102 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0708 23:44:50.901607  383102 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 23:44:50.902345  383102 kubeconfig.go:93] found "pause-20210708233938-257783" server: "https://192.168.58.2:8443"
	I0708 23:44:50.902810  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:50.904266  383102 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 23:44:50.910038  383102 api_server.go:164] Checking apiserver status ...
	I0708 23:44:50.910093  383102 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:44:50.921551  383102 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1608/cgroup
	I0708 23:44:50.927266  383102 api_server.go:180] apiserver freezer: "11:freezer:/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/system.slice/crio-f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c.scope"
	I0708 23:44:50.927324  383102 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/system.slice/crio-f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c.scope/freezer.state
	I0708 23:44:50.932380  383102 api_server.go:202] freezer state: "THAWED"
	I0708 23:44:50.932400  383102 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0708 23:44:50.940647  383102 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
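The api_server.go steps above (pgrep for the apiserver pid, resolving its freezer cgroup, reading freezer.state, then probing /healthz) amount to checking that the process exists, is not frozen by a pause, and answers HTTP. A condensed sketch of the same probe using the pid and paths from this log; the real client authenticates with the certs shown earlier, while this sketch skips TLS verification for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// freezerState resolves a pid's freezer cgroup from /proc/<pid>/cgroup
// (cgroup v1 lines look like "11:freezer:/docker/.../crio-<id>.scope")
// and reads its freezer.state, as the two Run: lines above do.
func freezerState(pid string) (string, error) {
	raw, err := os.ReadFile("/proc/" + pid + "/cgroup")
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(raw), "\n") {
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			return strings.TrimSpace(string(state)), err
		}
	}
	return "", fmt.Errorf("no freezer cgroup for pid %s", pid)
}

func main() {
	state, err := freezerState("1608")
	if err != nil || state != "THAWED" {
		fmt.Fprintln(os.Stderr, "apiserver frozen or unreadable:", state, err)
		return
	}
	// Throwaway test cluster with a self-signed CA; do not skip verification elsewhere.
	c := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := c.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}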
	I0708 23:44:50.968340  383102 system_pods.go:86] 7 kube-system pods found
	I0708 23:44:50.968365  383102 system_pods.go:89] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:50.968372  383102 system_pods.go:89] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:50.968381  383102 system_pods.go:89] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:50.968389  383102 system_pods.go:89] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:50.968394  383102 system_pods.go:89] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:50.968404  383102 system_pods.go:89] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:50.968409  383102 system_pods.go:89] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:50.969071  383102 api_server.go:139] control plane version: v1.21.2
	I0708 23:44:50.969091  383102 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.58.2
	I0708 23:44:50.969100  383102 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0708 23:44:50.969105  383102 kubeadm.go:604] restartCluster took 72.629672ms
	I0708 23:44:50.969114  383102 kubeadm.go:392] StartCluster complete in 143.022344ms
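restartCluster completing in ~73ms is the shortcut path: every probe passed, so kubeadm is never re-invoked. A hedged condensation of the gate, using the exact commands from the Run: lines above; running them locally is only a sketch, since minikube executes them over SSH inside the node, and diff -u exiting non-zero (files differ) is what would force a real restart:

package main

import (
	"fmt"
	"os/exec"
)

// shortcutChecks replays the first two probes from the log; the healthz and
// system-pods checks shown above complete the gate but need a live cluster.
func shortcutChecks() error {
	steps := [][]string{
		{"sudo", "ls", "/var/lib/kubelet/kubeadm-flags.env", "/var/lib/kubelet/config.yaml", "/var/lib/minikube/etcd"},
		{"sudo", "diff", "-u", "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := shortcutChecks(); err != nil {
		fmt.Println("cluster needs reconfiguration:", err)
		return
	}
	fmt.Println("taking a shortcut, as the cluster seems to be properly configured")
}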
	I0708 23:44:50.969124  383102 settings.go:142] acquiring lock: {Name:mkd7e81a263e91a8570dc867d9c6f95db0e3f272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:44:50.969188  383102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:44:50.969783  383102 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig: {Name:mk7ece99e42242db0c85d6c11531cc9d1c12a34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:44:50.970369  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:50.973359  383102 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210708233938-257783" rescaled to 1
	I0708 23:44:50.973409  383102 start.go:220] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0708 23:44:50.977036  383102 out.go:165] * Verifying Kubernetes components...
	I0708 23:44:50.977080  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:50.973644  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 23:44:50.973655  383102 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0708 23:44:50.977189  383102 addons.go:59] Setting storage-provisioner=true in profile "pause-20210708233938-257783"
	I0708 23:44:50.977229  383102 addons.go:135] Setting addon storage-provisioner=true in "pause-20210708233938-257783"
	W0708 23:44:50.977246  383102 addons.go:147] addon storage-provisioner should already be in state true
	I0708 23:44:50.977293  383102 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:50.977346  383102 addons.go:59] Setting default-storageclass=true in profile "pause-20210708233938-257783"
	I0708 23:44:50.977366  383102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210708233938-257783"
	I0708 23:44:50.977642  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:50.977846  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:51.040750  383102 out.go:165]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 23:44:51.040845  383102 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:44:51.040854  383102 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 23:44:51.040902  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:51.059995  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:51.063879  383102 addons.go:135] Setting addon default-storageclass=true in "pause-20210708233938-257783"
	W0708 23:44:51.063911  383102 addons.go:147] addon default-storageclass should already be in state true
	I0708 23:44:51.063955  383102 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:51.064454  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:51.120151  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:51.133089  383102 start.go:710] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0708 23:44:51.133129  383102 node_ready.go:35] waiting up to 6m0s for node "pause-20210708233938-257783" to be "Ready" ...
	I0708 23:44:51.144796  383102 node_ready.go:49] node "pause-20210708233938-257783" has status "Ready":"True"
	I0708 23:44:51.144810  383102 node_ready.go:38] duration metric: took 11.663188ms waiting for node "pause-20210708233938-257783" to be "Ready" ...
	I0708 23:44:51.144817  383102 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 23:44:51.151821  383102 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 23:44:51.151836  383102 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 23:44:51.151881  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:51.162008  383102 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.178412  383102 pod_ready.go:92] pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.178425  383102 pod_ready.go:81] duration metric: took 16.393726ms waiting for pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.178434  383102 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.182215  383102 pod_ready.go:92] pod "etcd-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.182231  383102 pod_ready.go:81] duration metric: took 3.790081ms waiting for pod "etcd-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.182242  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.185941  383102 pod_ready.go:92] pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.185957  383102 pod_ready.go:81] duration metric: took 3.703058ms waiting for pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.185966  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.193311  383102 pod_ready.go:92] pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.193326  383102 pod_ready.go:81] duration metric: took 7.350387ms waiting for pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.193335  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb2ws" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.199623  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:51.228409  383102 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:44:51.289804  383102 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 23:44:51.544987  383102 pod_ready.go:92] pod "kube-proxy-rb2ws" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.545034  383102 pod_ready.go:81] duration metric: took 351.691462ms waiting for pod "kube-proxy-rb2ws" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.545056  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.611304  383102 out.go:165] * Enabled addons: storage-provisioner, default-storageclass
	I0708 23:44:51.611327  383102 addons.go:344] enableAddons completed in 637.673923ms
	I0708 23:44:51.944191  383102 pod_ready.go:92] pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.944240  383102 pod_ready.go:81] duration metric: took 399.15943ms waiting for pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.944260  383102 pod_ready.go:38] duration metric: took 799.430802ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 23:44:51.944284  383102 api_server.go:50] waiting for apiserver process to appear ...
	I0708 23:44:51.944353  383102 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:44:51.962521  383102 api_server.go:70] duration metric: took 989.086682ms to wait for apiserver process to appear ...
	I0708 23:44:51.962540  383102 api_server.go:86] waiting for apiserver healthz status ...
	I0708 23:44:51.962549  383102 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0708 23:44:51.976017  383102 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0708 23:44:51.976872  383102 api_server.go:139] control plane version: v1.21.2
	I0708 23:44:51.976889  383102 api_server.go:129] duration metric: took 14.342835ms to wait for apiserver health ...
	I0708 23:44:51.976896  383102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 23:44:52.147101  383102 system_pods.go:59] 8 kube-system pods found
	I0708 23:44:52.147126  383102 system_pods.go:61] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:52.147132  383102 system_pods.go:61] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:52.147156  383102 system_pods.go:61] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:52.147170  383102 system_pods.go:61] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:52.147175  383102 system_pods.go:61] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:52.147180  383102 system_pods.go:61] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:52.147188  383102 system_pods.go:61] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:52.147196  383102 system_pods.go:61] "storage-provisioner" [939f2223-21e0-4e8d-8f43-fd8f9cc992b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 23:44:52.147205  383102 system_pods.go:74] duration metric: took 170.300522ms to wait for pod list to return data ...
	I0708 23:44:52.147214  383102 default_sa.go:34] waiting for default service account to be created ...
	I0708 23:44:52.344080  383102 default_sa.go:45] found service account: "default"
	I0708 23:44:52.344097  383102 default_sa.go:55] duration metric: took 196.867452ms for default service account to be created ...
	I0708 23:44:52.344104  383102 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 23:44:52.546575  383102 system_pods.go:86] 8 kube-system pods found
	I0708 23:44:52.546597  383102 system_pods.go:89] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:52.546603  383102 system_pods.go:89] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:52.546608  383102 system_pods.go:89] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:52.546614  383102 system_pods.go:89] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:52.546619  383102 system_pods.go:89] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:52.546624  383102 system_pods.go:89] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:52.546629  383102 system_pods.go:89] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:52.546638  383102 system_pods.go:89] "storage-provisioner" [939f2223-21e0-4e8d-8f43-fd8f9cc992b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 23:44:52.546644  383102 system_pods.go:126] duration metric: took 202.535502ms to wait for k8s-apps to be running ...
	I0708 23:44:52.546651  383102 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 23:44:52.546691  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:52.554858  383102 system_svc.go:56] duration metric: took 8.204667ms WaitForService to wait for kubelet.
	I0708 23:44:52.554876  383102 kubeadm.go:547] duration metric: took 1.581445531s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0708 23:44:52.554910  383102 node_conditions.go:102] verifying NodePressure condition ...
	I0708 23:44:52.744446  383102 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0708 23:44:52.744473  383102 node_conditions.go:123] node cpu capacity is 2
	I0708 23:44:52.744486  383102 node_conditions.go:105] duration metric: took 189.57062ms to run NodePressure ...
	I0708 23:44:52.744495  383102 start.go:225] waiting for startup goroutines ...
	I0708 23:44:52.795296  383102 start.go:462] kubectl: 1.21.2, cluster: 1.21.2 (minor skew: 0)
	I0708 23:44:52.798688  383102 out.go:165] * Done! kubectl is now configured to use "pause-20210708233938-257783" cluster and "default" namespace by default
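
The api_server.go lines above poll https://192.168.58.2:8443/healthz until it answers 200 "ok". A minimal standalone Go sketch of that wait follows; the file name, timeout, and retry interval are illustrative assumptions, not minikube's actual implementation.

// healthz_wait.go - hypothetical sketch of the apiserver healthz poll logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// Skip TLS verification: the apiserver cert is signed by the
	// per-profile minikube CA, not by a system root.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // server answered 200, as in the log above
			}
		}
		time.Sleep(500 * time.Millisecond) // retry interval is an assumption
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.58.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}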
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Thu 2021-07-08 23:42:57 UTC, end at Thu 2021-07-08 23:45:00 UTC. --
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.752367414Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-mnwpk Namespace:kube-system ID:ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f NetNS:/var/run/netns/ec4d5ca9-9e24-41d3-8013-97d3a7a811bd Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.752537333Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.851068596Z" level=info msg="Ran pod sandbox ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f with infra container: kube-system/coredns-558bd4d5db-mnwpk/POD" id=c9132cb2-089f-4563-8891-94bd70e68b31 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.851819090Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.0" id=535e481f-c9b5-4da0-888e-28da677e78c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.852397777Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:919b800fed6eaf6c9a55c3017c0aa3187bfe5d81abefbe49bb27f968458b94cc k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:39402464,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=535e481f-c9b5-4da0-888e-28da677e78c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.855119319Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.0" id=c226c346-9d15-4dc7-8640-d22668769349 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.855626237Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:919b800fed6eaf6c9a55c3017c0aa3187bfe5d81abefbe49bb27f968458b94cc k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:39402464,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c226c346-9d15-4dc7-8640-d22668769349 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.856387143Z" level=info msg="Creating container: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=9ffe0ac0-8fdf-4cab-92cd-d96e15acb1f8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.870099418Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged/etc/passwd: no such file or directory"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.870133896Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged/etc/group: no such file or directory"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.944318612Z" level=info msg="Created container b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=9ffe0ac0-8fdf-4cab-92cd-d96e15acb1f8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.944919240Z" level=info msg="Starting container: b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7" id=99de0758-72ab-4e9c-b175-4fef1b41793e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.955103274Z" level=info msg="Started container b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=99de0758-72ab-4e9c-b175-4fef1b41793e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:51 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:51.912211778Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=3fe405bf-c337-430c-ba8b-4acaabc95cf2 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.048464048Z" level=info msg="Ran pod sandbox 049e6b5335b3d37bd7b1f71f526dfb38a2146de747c5333e44dd562b58da320c with infra container: kube-system/storage-provisioner/POD" id=3fe405bf-c337-430c-ba8b-4acaabc95cf2 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.049231829Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b3e22441-f73c-48a3-b70b-8df95e9c6a80 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.049808794Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b3e22441-f73c-48a3-b70b-8df95e9c6a80 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.050512018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6d37e3ff-30d9-415f-ab55-83d772199ce8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.051005283Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6d37e3ff-30d9-415f-ab55-83d772199ce8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.051652721Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cbb3b7bb-00a6-411a-970c-153e4e488ad5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.065018823Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2db347e4f0d5d5b51e801807bc8894287c0f6d7b8ece1a922cadd38989584d2d/merged/etc/passwd: no such file or directory"
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.065122749Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2db347e4f0d5d5b51e801807bc8894287c0f6d7b8ece1a922cadd38989584d2d/merged/etc/group: no such file or directory"
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.130702118Z" level=info msg="Created container ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf: kube-system/storage-provisioner/storage-provisioner" id=cbb3b7bb-00a6-411a-970c-153e4e488ad5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.131445227Z" level=info msg="Starting container: ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf" id=5736abf9-7c6e-4d2d-99f4-1b9d9b3933f2 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.141586857Z" level=info msg="Started container ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf: kube-system/storage-provisioner/storage-provisioner" id=5736abf9-7c6e-4d2d-99f4-1b9d9b3933f2 name=/runtime.v1alpha2.RuntimeService/StartContainer
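
The CRI-O entries above are gRPC calls on the CRI runtime.v1alpha2.RuntimeService interface (RunPodSandbox, CreateContainer, StartContainer). As a hedged sketch, not part of the test suite, this Go snippet dials the same socket and issues the simplest RuntimeService RPC, Version; the socket path is taken from the kubeadm cri-socket annotation shown in the node description below.

// crio_version.go - hypothetical sketch of a client for the CRI socket these entries come from.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// RunPodSandbox, CreateContainer and StartContainer in the log are
	// served by this same RuntimeService client.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}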
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	ebc191d78d332       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   8 seconds ago        Running             storage-provisioner       0                   049e6b5335b3d
	b7d6404120fcb       1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8   15 seconds ago       Running             coredns                   0                   ded1c1360c407
	7ca432c9b0953       d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105   About a minute ago   Running             kube-proxy                0                   edb6f1460db48
	aa26d8524150c       f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301   About a minute ago   Running             kindnet-cni               0                   79814c347cb14
	0cb308b9b448f       9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630   About a minute ago   Running             kube-controller-manager   0                   ebf106620bd16
	66d5fee706a3d       ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4   About a minute ago   Running             kube-scheduler            0                   98331c8576b70
	76999b0177398       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28   About a minute ago   Running             etcd                      0                   7df3a2be1b33d
	f275fc53ae00f       2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0   About a minute ago   Running             kube-apiserver            0                   153d3d24ac6ae
	
	* 
	* ==> coredns [b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210708233938-257783
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-20210708233938-257783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5
	                    minikube.k8s.io/name=pause-20210708233938-257783
	                    minikube.k8s.io/updated_at=2021_07_08T23_43_45_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 08 Jul 2021 23:43:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210708233938-257783
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 08 Jul 2021 23:45:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:44:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    pause-20210708233938-257783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                06c382d0-5723-4c28-97d9-2bf95fc86b49
	  Boot ID:                    7cbe50af-3171-4d81-8fca-78216a04984f
	  Kernel Version:             5.8.0-1038-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.2
	  Kube-Proxy Version:         v1.21.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-mnwpk                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     62s
	  kube-system                 etcd-pause-20210708233938-257783                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         71s
	  kube-system                 kindnet-589hd                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      62s
	  kube-system                 kube-apiserver-pause-20210708233938-257783             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-pause-20210708233938-257783    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-rb2ws                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-pause-20210708233938-257783             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  93s (x8 over 94s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s (x7 over 94s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s (x7 over 94s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 72s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 61s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                21s                kubelet     Node pause-20210708233938-257783 status is now: NodeReady
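
The NodeReady event above is the condition node_ready.go waited on earlier in the trace. A rough standalone equivalent using client-go, reduced to a single Get (the kubeconfig path is a placeholder assumption; the test harness builds its rest.Config directly, as the kapi.go dumps above show):

// node_ready_check.go - hypothetical sketch of the Ready-condition check.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path, an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.Background(),
		"pause-20210708233938-257783", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// The Ready condition is the same one the Conditions table and the
	// NodeReady event report (reason KubeletReady).
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %q Ready=%s (%s)\n", node.Name, c.Status, c.Reason)
		}
	}
}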
	
	* 
	* ==> dmesg <==
	* [  +0.000671] FS-Cache: O-key=[8] '77e60b0000000000'
	[  +0.000514] FS-Cache: N-cookie c=00000000e6b84f6b [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000917] FS-Cache: N-cookie d=0000000052778918 n=000000009967b9dc
	[  +0.000663] FS-Cache: N-key=[8] '77e60b0000000000'
	[  +0.001810] FS-Cache: Duplicate cookie detected
	[  +0.000530] FS-Cache: O-cookie c=0000000057c7fc1d [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=0000000052778918 n=00000000efae32c9
	[  +0.000673] FS-Cache: O-key=[8] '77e60b0000000000'
	[  +0.000542] FS-Cache: N-cookie c=00000000f56d3f5d [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000863] FS-Cache: N-cookie d=0000000052778918 n=00000000e997ef03
	[  +0.000702] FS-Cache: N-key=[8] '77e60b0000000000'
	[  +1.187985] FS-Cache: Duplicate cookie detected
	[  +0.000541] FS-Cache: O-cookie c=000000000ea7a21c [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.000903] FS-Cache: O-cookie d=0000000052778918 n=00000000f7f72a4b
	[  +0.000697] FS-Cache: O-key=[8] '76e60b0000000000'
	[  +0.000532] FS-Cache: N-cookie c=00000000dc14d28d [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000872] FS-Cache: N-cookie d=0000000052778918 n=00000000fd1ba8e6
	[  +0.000719] FS-Cache: N-key=[8] '76e60b0000000000'
	[  +0.299966] FS-Cache: Duplicate cookie detected
	[  +0.000563] FS-Cache: O-cookie c=00000000b39eb93d [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.000913] FS-Cache: O-cookie d=0000000052778918 n=00000000654c5f24
	[  +0.000696] FS-Cache: O-key=[8] '79e60b0000000000'
	[  +0.000542] FS-Cache: N-cookie c=000000004dd4c5bf [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=0000000052778918 n=000000008dfb704a
	[  +0.000684] FS-Cache: N-key=[8] '79e60b0000000000'
	
	* 
	* ==> etcd [76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41] <==
	* 2021-07-08 23:43:29.407011 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-07-08 23:43:29.407103 I | etcdserver/api: enabled capabilities for version 3.4
	2021-07-08 23:43:29.407157 I | etcdserver: published {Name:pause-20210708233938-257783 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-07-08 23:43:29.407465 I | embed: ready to serve client requests
	2021-07-08 23:43:29.415509 I | embed: serving client requests on 127.0.0.1:2379
	2021-07-08 23:43:29.423096 I | embed: ready to serve client requests
	2021-07-08 23:43:29.424420 I | embed: serving client requests on 192.168.58.2:2379
	2021-07-08 23:43:38.896326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:43:39.824755 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:heapster\" " with result "range_response_count:0 size:4" took too long (132.781472ms) to execute
	2021-07-08 23:43:40.062517 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (126.484476ms) to execute
	2021-07-08 23:43:40.062723 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:node-bootstrapper\" " with result "range_response_count:0 size:4" took too long (157.087895ms) to execute
	2021-07-08 23:43:41.406099 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:kube-scheduler\" " with result "range_response_count:0 size:5" took too long (106.875165ms) to execute
	2021-07-08 23:43:41.406344 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210708233938-257783\" " with result "range_response_count:1 size:5706" took too long (100.988866ms) to execute
	2021-07-08 23:43:41.800497 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler\" " with result "range_response_count:0 size:5" took too long (104.221848ms) to execute
	2021-07-08 23:43:42.415075 W | etcdserver: read-only range request "key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" " with result "range_response_count:0 size:5" took too long (113.749552ms) to execute
	2021-07-08 23:43:42.790083 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system:controller:cloud-provider\" " with result "range_response_count:0 size:5" took too long (139.951306ms) to execute
	2021-07-08 23:43:42.790797 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (104.526083ms) to execute
	2021-07-08 23:43:55.482711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:43:58.854149 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:08.855081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:18.853976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:28.854207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:38.854850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:48.854356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:58.855084 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  23:45:00 up  2:27,  0 users,  load average: 3.84, 2.88, 1.90
	Linux pause-20210708233938-257783 5.8.0-1038-aws #40~20.04.1-Ubuntu SMP Thu Jun 17 13:20:15 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c] <==
	* I0708 23:43:38.659980       1 cache.go:39] Caches are synced for autoregister controller
	I0708 23:43:38.660019       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0708 23:43:38.765817       1 controller.go:611] quota admission added evaluator for: namespaces
	I0708 23:43:39.399647       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0708 23:43:39.399669       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0708 23:43:39.413314       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0708 23:43:39.428902       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0708 23:43:39.428920       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0708 23:43:42.417829       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 23:43:42.615824       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0708 23:43:42.951365       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0708 23:43:42.952308       1 controller.go:611] quota admission added evaluator for: endpoints
	I0708 23:43:42.961082       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 23:43:44.101667       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0708 23:43:44.674905       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0708 23:43:44.719032       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0708 23:43:48.275800       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 23:43:58.042264       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0708 23:43:58.285534       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0708 23:44:04.479561       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:44:04.479599       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:44:04.479606       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:44:35.434880       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:44:35.434919       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:44:35.434927       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef] <==
	* I0708 23:43:57.880761       1 shared_informer.go:247] Caches are synced for HPA 
	I0708 23:43:57.880837       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0708 23:43:57.907676       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0708 23:43:57.908756       1 shared_informer.go:247] Caches are synced for endpoint 
	I0708 23:43:57.952724       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210708233938-257783" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0708 23:43:57.956872       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210708233938-257783" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0708 23:43:58.003743       1 shared_informer.go:247] Caches are synced for deployment 
	I0708 23:43:58.061826       1 shared_informer.go:247] Caches are synced for disruption 
	I0708 23:43:58.061841       1 disruption.go:371] Sending events to api server.
	I0708 23:43:58.113116       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0708 23:43:58.121312       1 shared_informer.go:247] Caches are synced for resource quota 
	I0708 23:43:58.138517       1 shared_informer.go:247] Caches are synced for resource quota 
	I0708 23:43:58.160698       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-589hd"
	I0708 23:43:58.238962       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rb2ws"
	I0708 23:43:58.288443       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	E0708 23:43:58.326235       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"85756639-7788-414f-aae2-a95c8ac59acd", ResourceVersion:"309", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761384625, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000d528a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000d528b8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001394920), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d528d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d528e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d52900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001394940)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001394980)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40014b4240), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000f18168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a56700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400135e8f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000f181b0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0708 23:43:58.358158       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-xtvks"
	I0708 23:43:58.384252       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-mnwpk"
	I0708 23:43:58.532367       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0708 23:43:58.551775       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0708 23:43:58.551796       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0708 23:43:58.636207       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0708 23:43:58.654856       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-xtvks"
	I0708 23:44:42.867632       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a] <==
	* I0708 23:43:59.522352       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0708 23:43:59.522418       1 server_others.go:140] Detected node IP 192.168.58.2
	W0708 23:43:59.522436       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0708 23:43:59.592863       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0708 23:43:59.592891       1 server_others.go:212] Using iptables Proxier.
	I0708 23:43:59.592900       1 server_others.go:219] creating dualStackProxier for iptables.
	W0708 23:43:59.592910       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0708 23:43:59.593168       1 server.go:643] Version: v1.21.2
	I0708 23:43:59.593489       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0708 23:43:59.593530       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0708 23:43:59.594089       1 config.go:315] Starting service config controller
	I0708 23:43:59.594140       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0708 23:43:59.594778       1 config.go:224] Starting endpoint slice config controller
	I0708 23:43:59.594818       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0708 23:43:59.596985       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0708 23:43:59.598797       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0708 23:43:59.695058       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0708 23:43:59.695065       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e] <==
	* E0708 23:43:38.663259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 23:43:38.663881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:43:38.663980       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:43:38.664026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:43:38.664104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:43:38.664153       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:43:38.667689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:43:39.506225       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:43:39.684692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:43:39.707077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:39.715815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 23:43:39.739475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:43:39.927791       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:43:39.950708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:43:40.026534       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 23:43:40.052611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.106259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.125654       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.138747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:43:40.200954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 23:43:40.246523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0708 23:43:42.914398       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-07-08 23:42:57 UTC, end at Thu 2021-07-08 23:45:01 UTC. --
	Jul 08 23:44:08 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:08.911690    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:09 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:09.035899    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:13 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:13.913137    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:18 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:18.914320    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:19 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:19.141178    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:23 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:23.915497    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:28 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:28.916031    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:29 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:29.195302    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:39 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:39.262592    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.378521    2084 topology_manager.go:187] "Topology Admit Handler"
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.439076    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd8ce294-9dba-4d2e-8793-cc0862414323-config-volume\") pod \"coredns-558bd4d5db-mnwpk\" (UID: \"cd8ce294-9dba-4d2e-8793-cc0862414323\") "
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.439122    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjk4b\" (UniqueName: \"kubernetes.io/projected/cd8ce294-9dba-4d2e-8793-cc0862414323-kube-api-access-wjk4b\") pod \"coredns-558bd4d5db-mnwpk\" (UID: \"cd8ce294-9dba-4d2e-8793-cc0862414323\") "
	Jul 08 23:44:49 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:49.318435    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:49 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:49.796926    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:50 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:50.049924    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:50 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:50.459815    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:51.106860    2084 container.go:586] Failed to update stats for container "/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7": /sys/fs/cgroup/cpuset/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/cpuset.cpus found to be empty, continuing to push stats
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:51.127776    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.610872    2084 topology_manager.go:187] "Topology Admit Handler"
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.679061    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ndmf\" (UniqueName: \"kubernetes.io/projected/939f2223-21e0-4e8d-8f43-fd8f9cc992b8-kube-api-access-6ndmf\") pod \"storage-provisioner\" (UID: \"939f2223-21e0-4e8d-8f43-fd8f9cc992b8\") "
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.679129    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/939f2223-21e0-4e8d-8f43-fd8f9cc992b8-tmp\") pod \"storage-provisioner\" (UID: \"939f2223-21e0-4e8d-8f43-fd8f9cc992b8\") "
	Jul 08 23:44:52 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:52.247269    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:53 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:53.995237    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:56 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:56.896369    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:59 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:59.371590    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	
	* 
	* ==> storage-provisioner [ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf] <==
	* I0708 23:44:52.156408       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 23:44:52.170055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 23:44:52.170092       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 23:44:52.181346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 23:44:52.181466       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409!
	I0708 23:44:52.181651       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"812885a7-6ecb-4200-9882-e4b3a6fd0939", APIVersion:"v1", ResourceVersion:"519", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409 became leader
	I0708 23:44:52.282548       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210708233938-257783 -n pause-20210708233938-257783
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210708233938-257783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestPause/serial/VerifyStatus]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context pause-20210708233938-257783 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context pause-20210708233938-257783 describe pod : exit status 1 (76.972944ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:275: kubectl --context pause-20210708233938-257783 describe pod : exit status 1
--- FAIL: TestPause/serial/VerifyStatus (2.81s)
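Note that the exit status 1 above is an artifact of the post-mortem helper, not a second failure: the field-selector query returned no non-running pods, so `kubectl describe pod` was invoked with no names. A minimal Go sketch of the guard that avoids this (illustrative only — the context handling and variable names here are assumptions, not minikube's actual helpers_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		context := "pause-20210708233938-257783"

		// Same query the harness runs: names of pods not in phase Running.
		out, err := exec.Command("kubectl", "--context", context,
			"get", "po", "-o=jsonpath={.items[*].metadata.name}",
			"-A", "--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("get pods failed:", err)
			return
		}

		names := strings.Fields(string(out))
		if len(names) == 0 {
			// Describing with no names is what produced the
			// "resource name may not be empty" exit 1 above.
			fmt.Println("no non-running pods; skipping describe")
			return
		}

		args := append([]string{"--context", context, "describe", "pod"}, names...)
		desc, _ := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(desc))
	}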

                                                
                                    
TestPause/serial/PauseAgain (5.69s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-20210708233938-257783 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-20210708233938-257783 --alsologtostderr -v=5: exit status 80 (2.357678121s)

                                                
                                                
-- stdout --
	* Pausing node pause-20210708233938-257783 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0708 23:45:02.191487  385396 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:45:02.191661  385396 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:45:02.191684  385396 out.go:299] Setting ErrFile to fd 2...
	I0708 23:45:02.191706  385396 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:45:02.191838  385396 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:45:02.192018  385396 out.go:293] Setting JSON to false
	I0708 23:45:02.192060  385396 mustload.go:65] Loading cluster: pause-20210708233938-257783
	I0708 23:45:02.192821  385396 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:45:02.258115  385396 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:45:02.258801  385396 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.22.0.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0/minikube-v1.22.0.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210708233938-257783 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)]="(MISSING)"
	I0708 23:45:02.260967  385396 out.go:165] * Pausing node pause-20210708233938-257783 ... 
	I0708 23:45:02.260985  385396 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:45:02.261239  385396 ssh_runner.go:149] Run: systemctl --version
	I0708 23:45:02.261280  385396 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:45:02.301233  385396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:45:02.398934  385396 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:45:02.407053  385396 pause.go:50] kubelet running: true
	I0708 23:45:02.407112  385396 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0708 23:45:02.551844  385396 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0708 23:45:02.828257  385396 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:45:02.836405  385396 pause.go:50] kubelet running: true
	I0708 23:45:02.836445  385396 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0708 23:45:02.990884  385396 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0708 23:45:03.531224  385396 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:45:03.539117  385396 pause.go:50] kubelet running: true
	I0708 23:45:03.539163  385396 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0708 23:45:03.681906  385396 retry.go:31] will retry after 655.06503ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0708 23:45:04.337389  385396 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:45:04.345796  385396 pause.go:50] kubelet running: true
	I0708 23:45:04.345836  385396 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0708 23:45:04.492987  385396 out.go:165] 
	W0708 23:45:04.493084  385396 out.go:230] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0708 23:45:04.493097  385396 out.go:230] * 
	* 
	W0708 23:45:04.496917  385396 out.go:230] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0708 23:45:04.499584  385396 out.go:165] 

                                                
                                                
** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-20210708233938-257783 --alsologtostderr -v=5" : exit status 80
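The `retry.go:31] will retry after ...` lines in the stderr block above show minikube re-running `sudo systemctl disable --now kubelet` with growing delays before giving up with GUEST_PAUSE. A generic Go sketch of that retry-with-backoff pattern (the base delay, growth factor, and attempt cap here are assumptions for illustration, not minikube's actual pkg/util/retry code):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryExpo retries fn up to attempts times, sleeping a jittered,
	// roughly doubling delay between tries — similar in spirit to the
	// 276ms -> 540ms -> 655ms spacing in the log above.
	func retryExpo(attempts int, base time.Duration, fn func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}

	func main() {
		err := retryExpo(4, 250*time.Millisecond, func() error {
			return exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").Run()
		})
		if err != nil {
			fmt.Println("kubelet disable failed after retries:", err)
		}
	}

In this run every attempt failed the same way (`update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.`), so the backoff only delayed the inevitable exit status 80.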
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210708233938-257783
helpers_test.go:236: (dbg) docker inspect pause-20210708233938-257783:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7",
	        "Created": "2021-07-08T23:42:55.939971333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 374514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-07-08T23:42:56.671510562Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/hosts",
	        "LogPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7-json.log",
	        "Name": "/pause-20210708233938-257783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210708233938-257783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210708233938-257783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa317
1af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/d
ocker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41
f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210708233938-257783",
	                "Source": "/var/lib/docker/volumes/pause-20210708233938-257783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210708233938-257783",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210708233938-257783",
	                "name.minikube.sigs.k8s.io": "pause-20210708233938-257783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3364fc967f3a3a4f088daf2fc73d5bc45f12bb4867ba695dabf0ca91254c0104",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49617"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49616"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49613"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49615"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49614"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3364fc967f3a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210708233938-257783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9e0e986f196e",
	                        "pause-20210708233938-257783"
	                    ],
	                    "NetworkID": "7afb1bbd4669bf981affda6e21a0542828c16cc07887274e53996cdbb87c5e05",
	                    "EndpointID": "cf78b06b889a67153f813b6dd94cd8e9e0adb49ff2586b7f7058289d1b323f20",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
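Rather than scanning the full inspect dump above, individual fields can be pulled with the same Go-template `--format` flag the harness itself uses (see the cli_runner.go line in the earlier stderr block). A small sketch, assuming `docker` is on PATH and the container is still running:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspect evaluates a Go template against one container's inspect data.
	func inspect(container, tmpl string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"--format", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		c := "pause-20210708233938-257783"
		for _, tmpl := range []string{
			"{{.State.Status}}", // "running" in the dump above
			// Same template the harness used to find the SSH port (49617 above).
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		} {
			v, err := inspect(c, tmpl)
			if err != nil {
				fmt.Println(tmpl, "failed:", err)
				continue
			}
			fmt.Println(tmpl, "=>", v)
		}
	}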
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210708233938-257783 -n pause-20210708233938-257783
E0708 23:45:04.592808  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p pause-20210708233938-257783 logs -n 25
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                    Args                    |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:38:52 UTC | Thu, 08 Jul 2021 23:38:52 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --cancel-scheduled                         |                                            |         |         |                               |                               |
	| stop    | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:05 UTC | Thu, 08 Jul 2021 23:39:12 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --schedule 5s                              |                                            |         |         |                               |                               |
	| delete  | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:12 UTC | Thu, 08 Jul 2021 23:39:17 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	| delete  | -p                                         | insufficient-storage-20210708233917-257783 | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:32 UTC | Thu, 08 Jul 2021 23:39:38 UTC |
	|         | insufficient-storage-20210708233917-257783 |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubenet-20210708233938-257783              | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:39:38 UTC |
	|         | kubenet-20210708233938-257783              |                                            |         |         |                               |                               |
	| delete  | -p                                         | flannel-20210708233938-257783              | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:39:39 UTC |
	|         | flannel-20210708233938-257783              |                                            |         |         |                               |                               |
	| delete  | -p false-20210708233939-257783             | false-20210708233939-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:39 UTC | Thu, 08 Jul 2021 23:39:40 UTC |
	| start   | -p                                         | force-systemd-env-20210708233940-257783    | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:40 UTC | Thu, 08 Jul 2021 23:40:42 UTC |
	|         | force-systemd-env-20210708233940-257783    |                                            |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=5 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-env-20210708233940-257783    | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:40:42 UTC | Thu, 08 Jul 2021 23:40:44 UTC |
	|         | force-systemd-env-20210708233940-257783    |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-flag-20210708234044-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:40:44 UTC | Thu, 08 Jul 2021 23:41:30 UTC |
	|         | force-systemd-flag-20210708234044-257783   |                                            |         |         |                               |                               |
	|         | --memory=2048 --force-systemd              |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-flag-20210708234044-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:41:30 UTC | Thu, 08 Jul 2021 23:41:33 UTC |
	|         | force-systemd-flag-20210708234044-257783   |                                            |         |         |                               |                               |
	| start   | -p                                         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:41:33 UTC | Thu, 08 Jul 2021 23:42:18 UTC |
	|         | cert-options-20210708234133-257783         |                                            |         |         |                               |                               |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                  |                                            |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15              |                                            |         |         |                               |                               |
	|         | --apiserver-names=localhost                |                                            |         |         |                               |                               |
	|         | --apiserver-names=www.google.com           |                                            |         |         |                               |                               |
	|         | --apiserver-port=8555                      |                                            |         |         |                               |                               |
	|         | --driver=docker                            |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| -p      | cert-options-20210708234133-257783         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:19 UTC | Thu, 08 Jul 2021 23:42:19 UTC |
	|         | ssh openssl x509 -text -noout -in          |                                            |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt      |                                            |         |         |                               |                               |
	| delete  | -p                                         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:19 UTC | Thu, 08 Jul 2021 23:42:22 UTC |
	|         | cert-options-20210708234133-257783         |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:22 UTC | Thu, 08 Jul 2021 23:43:17 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0               |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| stop    | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:43:17 UTC | Thu, 08 Jul 2021 23:43:20 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:43:20 UTC | Thu, 08 Jul 2021 23:44:07 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-beta.0        |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:07 UTC | Thu, 08 Jul 2021 23:44:29 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-beta.0        |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:29 UTC | Thu, 08 Jul 2021 23:44:32 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	| start   | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:44:47 UTC |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --install-addons=false                     |                                            |         |         |                               |                               |
	|         | --wait=all --driver=docker                 |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:47 UTC | Thu, 08 Jul 2021 23:44:52 UTC |
	|         | --alsologtostderr                          |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| -p      | pause-20210708233938-257783                | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:55 UTC | Thu, 08 Jul 2021 23:44:56 UTC |
	|         | logs -n 25                                 |                                            |         |         |                               |                               |
	| -p      | pause-20210708233938-257783                | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:57 UTC | Thu, 08 Jul 2021 23:44:58 UTC |
	|         | logs -n 25                                 |                                            |         |         |                               |                               |
	| -p      | pause-20210708233938-257783                | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:59 UTC | Thu, 08 Jul 2021 23:45:01 UTC |
	|         | logs -n 25                                 |                                            |         |         |                               |                               |
	| unpause | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:45:01 UTC | Thu, 08 Jul 2021 23:45:02 UTC |
	|         | --alsologtostderr -v=5                     |                                            |         |         |                               |                               |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/07/08 23:44:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 23:44:47.154451  383102 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:44:47.154571  383102 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:47.154583  383102 out.go:299] Setting ErrFile to fd 2...
	I0708 23:44:47.154587  383102 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:47.154704  383102 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:44:47.154960  383102 out.go:293] Setting JSON to false
	I0708 23:44:47.156021  383102 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8836,"bootTime":1625779051,"procs":490,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:44:47.156093  383102 start.go:121] virtualization:  
	I0708 23:44:47.158605  383102 out.go:165] * [pause-20210708233938-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0708 23:44:47.160748  383102 out.go:165]   - MINIKUBE_LOCATION=11942
	I0708 23:44:47.162569  383102 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:44:47.164384  383102 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0708 23:44:47.166094  383102 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0708 23:44:47.166892  383102 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:44:47.221034  383102 docker.go:132] docker version: linux-20.10.7
	I0708 23:44:47.221102  383102 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:44:47.306208  383102 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:49 SystemTime:2021-07-08 23:44:47.254744355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
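
The `docker system info --format "{{json .}}"` dump above is JSON that info.go decodes into a struct. A minimal sketch of pulling out just a few of the fields visible in the log; the subset struct below is illustrative only, not minikube's actual type:

package main

import (
	"encoding/json"
	"fmt"
)

// dockerInfo keeps only a handful of the fields from `docker system info
// --format "{{json .}}"`; encoding/json silently ignores the rest of the
// (very large) payload, so a subset struct is enough for driver checks.
type dockerInfo struct {
	CgroupDriver string
	OSType       string
	Architecture string
	NCPU         int
	MemTotal     int64
}

func main() {
	raw := []byte(`{"CgroupDriver":"cgroupfs","OSType":"linux","Architecture":"aarch64","NCPU":2,"MemTotal":8227766272}`)
	var di dockerInfo
	if err := json.Unmarshal(raw, &di); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", di)
}

Because unknown keys are ignored, the same decode keeps working as Docker adds fields to the payload.
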
	I0708 23:44:47.306303  383102 docker.go:244] overlay module found
	I0708 23:44:47.309309  383102 out.go:165] * Using the docker driver based on existing profile
	I0708 23:44:47.309327  383102 start.go:278] selected driver: docker
	I0708 23:44:47.309332  383102 start.go:751] validating driver "docker" against &{Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:47.309419  383102 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0708 23:44:47.309784  383102 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:44:47.393590  383102 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:49 SystemTime:2021-07-08 23:44:47.342522281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:44:47.393925  383102 cni.go:93] Creating CNI manager for ""
	I0708 23:44:47.393941  383102 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
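
The kindnet recommendation at cni.go:160 above is driven by the driver/runtime pair. A sketch of that decision as logged; the real cni.go handles more drivers and runtimes than this:

package main

import "fmt"

// chooseCNI mirrors the logged rule: the docker driver combined with a
// non-docker runtime (crio here) yields the kindnet recommendation.
// Simplified sketch only; other combinations are omitted.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return ""
}

func main() { fmt.Println(chooseCNI("docker", "crio")) }
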
	I0708 23:44:47.393950  383102 start_flags.go:275] config:
	{Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:47.396046  383102 out.go:165] * Starting control plane node pause-20210708233938-257783 in cluster pause-20210708233938-257783
	I0708 23:44:47.396084  383102 cache.go:117] Beginning downloading kic base image for docker with crio
	I0708 23:44:47.398019  383102 out.go:165] * Pulling base image ...
	I0708 23:44:47.398037  383102 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:44:47.398068  383102 preload.go:150] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4
	I0708 23:44:47.398080  383102 cache.go:56] Caching tarball of preloaded images
	I0708 23:44:47.398205  383102 preload.go:174] Found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0708 23:44:47.398227  383102 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.2 on crio
	I0708 23:44:47.398319  383102 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/config.json ...
	I0708 23:44:47.398483  383102 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0708 23:44:47.436290  383102 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0708 23:44:47.436316  383102 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0708 23:44:47.436330  383102 cache.go:205] Successfully downloaded all kic artifacts
	I0708 23:44:47.436359  383102 start.go:313] acquiring machines lock for pause-20210708233938-257783: {Name:mk0dd574f5aab82d7e948dc25f56eae9437435ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 23:44:47.436434  383102 start.go:317] acquired machines lock for "pause-20210708233938-257783" in 54.777µs
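
The machines lock above is acquired under the spec printed in the log (Delay:500ms, Timeout:10m0s). minikube itself uses a proper mutex package for this; the lock-file loop below is only a simplified stand-in that demonstrates the same retry/deadline contract:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire retries an exclusive lock-file create every delay until timeout,
// approximating the Delay:500ms Timeout:10m0s spec in the log. This is a
// sketch, not minikube's actual locking implementation.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
}
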
	I0708 23:44:47.436455  383102 start.go:93] Skipping create...Using existing machine configuration
	I0708 23:44:47.436464  383102 fix.go:55] fixHost starting: 
	I0708 23:44:47.436724  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:47.471771  383102 fix.go:108] recreateIfNeeded on pause-20210708233938-257783: state=Running err=<nil>
	W0708 23:44:47.471801  383102 fix.go:134] unexpected machine state, will restart: <nil>
	I0708 23:44:47.474143  383102 out.go:165] * Updating the running docker "pause-20210708233938-257783" container ...
	I0708 23:44:47.474165  383102 machine.go:88] provisioning docker machine ...
	I0708 23:44:47.474179  383102 ubuntu.go:169] provisioning hostname "pause-20210708233938-257783"
	I0708 23:44:47.474233  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:47.518727  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:47.518901  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:47.518913  383102 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210708233938-257783 && echo "pause-20210708233938-257783" | sudo tee /etc/hostname
	I0708 23:44:47.662054  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210708233938-257783
	
	I0708 23:44:47.662122  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:47.698564  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:47.698719  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:47.698745  383102 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210708233938-257783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210708233938-257783/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210708233938-257783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0708 23:44:47.806503  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: 
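
The SSH script above is idempotent: it does nothing if the hostname is already in /etc/hosts, rewrites an existing 127.0.1.1 entry if there is one, and appends otherwise. A sketch that generates the same snippet, compacted to one line, for an arbitrary machine name (hostsCmd is a hypothetical helper, not minikube's template):

package main

import "fmt"

// hostsCmd reproduces the logged shell: skip if the name is already present,
// else update or append the 127.0.1.1 line in /etc/hosts.
func hostsCmd(name string) string {
	return fmt.Sprintf(
		"if ! grep -xq '.*\\s%[1]s' /etc/hosts; then "+
			"if grep -xq '127.0.1.1\\s.*' /etc/hosts; then "+
			"sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %[1]s/g' /etc/hosts; "+
			"else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi",
		name)
}

func main() { fmt.Println(hostsCmd("pause-20210708233938-257783")) }
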
	I0708 23:44:47.806520  383102 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube}
	I0708 23:44:47.806546  383102 ubuntu.go:177] setting up certificates
	I0708 23:44:47.806556  383102 provision.go:83] configureAuth start
	I0708 23:44:47.806605  383102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210708233938-257783
	I0708 23:44:47.841582  383102 provision.go:137] copyHostCerts
	I0708 23:44:47.841630  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem, removing ...
	I0708 23:44:47.841642  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem
	I0708 23:44:47.841700  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem (1078 bytes)
	I0708 23:44:47.841780  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem, removing ...
	I0708 23:44:47.841793  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem
	I0708 23:44:47.841816  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem (1123 bytes)
	I0708 23:44:47.841862  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem, removing ...
	I0708 23:44:47.841871  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem
	I0708 23:44:47.841892  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem (1679 bytes)
	I0708 23:44:47.841933  383102 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem org=jenkins.pause-20210708233938-257783 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210708233938-257783]
	I0708 23:44:48.952877  383102 provision.go:171] copyRemoteCerts
	I0708 23:44:48.952938  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 23:44:48.952979  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:48.988956  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.069409  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 23:44:49.084030  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0708 23:44:49.098201  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 23:44:49.112707  383102 provision.go:86] duration metric: configureAuth took 1.306144285s
	I0708 23:44:49.112722  383102 ubuntu.go:193] setting minikube options for container-runtime
	I0708 23:44:49.112945  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.147842  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:49.148030  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:49.148050  383102 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	I0708 23:44:49.265435  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 23:44:49.265449  383102 machine.go:91] provisioned docker machine in 1.791277399s
	I0708 23:44:49.265466  383102 start.go:267] post-start starting for "pause-20210708233938-257783" (driver="docker")
	I0708 23:44:49.265473  383102 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 23:44:49.265521  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 23:44:49.265564  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.302440  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.385342  383102 ssh_runner.go:149] Run: cat /etc/os-release
	I0708 23:44:49.387501  383102 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0708 23:44:49.387521  383102 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0708 23:44:49.387533  383102 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0708 23:44:49.387542  383102 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0708 23:44:49.387552  383102 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/addons for local assets ...
	I0708 23:44:49.387592  383102 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/files for local assets ...
	I0708 23:44:49.387720  383102 start.go:270] post-start completed in 122.24664ms
	I0708 23:44:49.387753  383102 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 23:44:49.387787  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.422565  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.503288  383102 fix.go:57] fixHost completed within 2.066821667s
	I0708 23:44:49.503310  383102 start.go:80] releasing machines lock for "pause-20210708233938-257783", held for 2.066864546s
	I0708 23:44:49.503369  383102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210708233938-257783
	I0708 23:44:49.537513  383102 ssh_runner.go:149] Run: systemctl --version
	I0708 23:44:49.537553  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.537599  383102 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0708 23:44:49.537656  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.578213  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.591758  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.667104  383102 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0708 23:44:49.802373  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0708 23:44:49.809858  383102 docker.go:153] disabling docker service ...
	I0708 23:44:49.809898  383102 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0708 23:44:49.818109  383102 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0708 23:44:49.826668  383102 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0708 23:44:49.957409  383102 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0708 23:44:50.082177  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0708 23:44:50.090087  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 23:44:50.100877  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0708 23:44:50.109868  383102 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0708 23:44:50.109919  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0708 23:44:50.116503  383102 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 23:44:50.121833  383102 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 23:44:50.126949  383102 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0708 23:44:50.251265  383102 ssh_runner.go:149] Run: sudo systemctl start crio
	I0708 23:44:50.259385  383102 start.go:386] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 23:44:50.259425  383102 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
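
"Will wait 60s for socket path" is a stat poll with a deadline. A minimal sketch of that wait, assuming a plain filesystem existence check is sufficient (the poll interval is an assumption, not taken from the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the socket path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
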
	I0708 23:44:50.261926  383102 start.go:411] Will wait 60s for crictl version
	I0708 23:44:50.261961  383102 ssh_runner.go:149] Run: sudo crictl version
	I0708 23:44:50.286962  383102 start.go:420] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0708 23:44:50.287041  383102 ssh_runner.go:149] Run: crio --version
	I0708 23:44:50.352750  383102 ssh_runner.go:149] Run: crio --version
	I0708 23:44:50.423233  383102 out.go:165] * Preparing Kubernetes v1.21.2 on CRI-O 1.20.3 ...
	I0708 23:44:50.423307  383102 cli_runner.go:115] Run: docker network inspect pause-20210708233938-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0708 23:44:50.464228  383102 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0708 23:44:50.467264  383102 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:44:50.467314  383102 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:44:50.490940  383102 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:44:50.490957  383102 crio.go:333] Images already preloaded, skipping extraction
	I0708 23:44:50.490993  383102 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:44:50.512176  383102 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:44:50.512192  383102 cache_images.go:74] Images are preloaded, skipping loading
	I0708 23:44:50.512245  383102 ssh_runner.go:149] Run: crio config
	I0708 23:44:50.587658  383102 cni.go:93] Creating CNI manager for ""
	I0708 23:44:50.587677  383102 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:44:50.587685  383102 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0708 23:44:50.587790  383102 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210708233938-257783 NodeName:pause-20210708233938-257783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0708 23:44:50.587905  383102 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210708233938-257783"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
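
Blocks like the KubeletConfiguration above are rendered from Go values before being shipped to the node. A sketch with text/template; the template below is an assumed simplification for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeletTmpl is a cut-down, assumed stand-in for the kubelet section of
// the generated kubeadm config shown above.
const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
clusterDomain: "{{.DNSDomain}}"
failSwapOn: false
staticPodPath: {{.StaticPodPath}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	if err := t.Execute(os.Stdout, map[string]string{
		"CgroupDriver":  "systemd",
		"DNSDomain":     "cluster.local",
		"StaticPodPath": "/etc/kubernetes/manifests",
	}); err != nil {
		panic(err)
	}
}
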
	
	I0708 23:44:50.587994  383102 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-20210708233938-257783 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0708 23:44:50.588044  383102 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0708 23:44:50.593749  383102 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 23:44:50.593819  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 23:44:50.599162  383102 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (558 bytes)
	I0708 23:44:50.609681  383102 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 23:44:50.620170  383102 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1884 bytes)
	I0708 23:44:50.630479  383102 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0708 23:44:50.632974  383102 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783 for IP: 192.168.58.2
	I0708 23:44:50.633021  383102 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key
	I0708 23:44:50.633039  383102 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key
	I0708 23:44:50.633098  383102 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.key
	I0708 23:44:50.633117  383102 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.key.cee25041
	I0708 23:44:50.633142  383102 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.key
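
The three "skipping ... signed cert generation" lines above mean the existing key pairs for this profile were found reusable. A sketch of the minimum reuse check (simplified; the real certs.go logic also compares SANs and the signing CA):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// canReuse reports whether a PEM certificate on disk parses and has not
// expired, the bare minimum needed to skip regenerating it.
func canReuse(path string) bool {
	data, err := os.ReadFile(path)
	if err != nil {
		return false
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false
	}
	return time.Now().Before(cert.NotAfter)
}

func main() {
	fmt.Println(canReuse("/var/lib/minikube/certs/apiserver.crt"))
}
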
	I0708 23:44:50.633227  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem (1338 bytes)
	W0708 23:44:50.633268  383102 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783_empty.pem, impossibly tiny 0 bytes
	I0708 23:44:50.633280  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem (1675 bytes)
	I0708 23:44:50.633305  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem (1078 bytes)
	I0708 23:44:50.633332  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem (1123 bytes)
	I0708 23:44:50.633356  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem (1679 bytes)
	I0708 23:44:50.634343  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0708 23:44:50.648438  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 23:44:50.662480  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 23:44:50.677256  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 23:44:50.691568  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 23:44:50.705113  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0708 23:44:50.718728  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 23:44:50.733001  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 23:44:50.748832  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 23:44:50.762662  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem --> /usr/share/ca-certificates/257783.pem (1338 bytes)
	I0708 23:44:50.776552  383102 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 23:44:50.786598  383102 ssh_runner.go:149] Run: openssl version
	I0708 23:44:50.790834  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 23:44:50.796632  383102 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.799083  383102 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jul  8 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.799118  383102 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.803062  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 23:44:50.808543  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257783.pem && ln -fs /usr/share/ca-certificates/257783.pem /etc/ssl/certs/257783.pem"
	I0708 23:44:50.814370  383102 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.816803  383102 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jul  8 23:18 /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.816856  383102 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.820832  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257783.pem /etc/ssl/certs/51391683.0"
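
The openssl/ln pairs above install CAs into OpenSSL's hashed-lookup directory: `openssl x509 -hash -noout` prints the subject hash (b5213941 and 51391683 in this run), and a `<hash>.0` symlink in /etc/ssl/certs is how verification finds the certificate. A sketch of the same two steps from Go (linkCert is an assumed helper and needs write access to the certs directory):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes a certificate's OpenSSL subject hash and symlinks
// <hash>.0 in certsDir at the certificate, matching the logged commands.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	_ = linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
}
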
	I0708 23:44:50.826095  383102 kubeadm.go:390] StartCluster: {Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:50.826162  383102 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 23:44:50.826221  383102 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 23:44:50.849897  383102 cri.go:76] found id: "b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7"
	I0708 23:44:50.849919  383102 cri.go:76] found id: "7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a"
	I0708 23:44:50.849943  383102 cri.go:76] found id: "aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e"
	I0708 23:44:50.849950  383102 cri.go:76] found id: "0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef"
	I0708 23:44:50.849954  383102 cri.go:76] found id: "66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e"
	I0708 23:44:50.849963  383102 cri.go:76] found id: "76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41"
	I0708 23:44:50.849967  383102 cri.go:76] found id: "f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c"
	I0708 23:44:50.849975  383102 cri.go:76] found id: ""
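
The seven container IDs plus one empty `found id: ""` above come from splitting the `crictl ps -a --quiet` output (one ID per line) on newlines. A sketch of that parse, mirroring why the empty entry appears:

package main

import (
	"fmt"
	"strings"
)

// splitIDs mirrors the logged parse of `crictl ps -a --quiet`: a plain
// newline split, which keeps the empty string after the final newline and
// explains the trailing `found id: ""` above.
func splitIDs(out string) []string {
	return strings.Split(out, "\n")
}

func main() {
	fmt.Printf("%q\n", splitIDs("b7d6404120fc\n7ca432c9b095\n"))
}
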
	I0708 23:44:50.850009  383102 ssh_runner.go:149] Run: sudo runc list -f json
	I0708 23:44:50.888444  383102 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef","pid":1704,"status":"running","bundle":"/run/containers/storage/overlay-containers/0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef/userdata","rootfs":"/var/lib/containers/storage/overlay/cdce3ed6af07ab111ab2fb108c2309db54d9634ce1811e68896b699446ff3e45/merged","created":"2021-07-08T23:43:28.796258779Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1c9d3bb9","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1c9d3bb9\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.409347635Z","io.kubernetes.cri-o.Image":"9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.2","io.kubernetes.cri-o.ImageRef":"9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c0a79d1d801cddeaa32444663181957f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210708233938-257783_c0a79d1d801cddeaa32444663181957f/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cdce3ed6af07ab111ab2fb108c2309db54d9634ce1811e68896b699446ff3e45/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c0a79d1d801cddeaa32444663181957f/containers/kube-controller-manager/b3e49874\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c0a79d1d801cddeaa32444663181957f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.hash":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.seen":"2021-07-08T23:43:23.463755710Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","pid":1454,"status":"running","bundle":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata","rootfs":"/var/lib/containers/storage/overlay/995db9ddd5bff03a3e4252f22825a88a5095babda303cc304d0d9f42db6e7025/merged","created":"2021-07-08T23:43:27.761173536Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.58.2:8443\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463729331Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"48a917795140826e0af6da63b039926b\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.500118923Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"48a917795140826e0af6da63b039926b\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210708233938-257783\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210708233938-257783_48a917795140826e0af6da63b039926b/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210708233938-257783\",\"uid\":\"48a917795140826e0af6da63b039926b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/995db9ddd5bff03a3e4252f22825a88a5095babda303cc304d0d9f42db6e7025/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"48a917795140826e0af6da63b039926b","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48a917795140826e0af6da63b039926b","kubernetes.io/config.seen":"2021-07-08T23:43:23.463729331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e","pid":1696,"status":"running","bundle":"/run/containers/storage/overlay-containers/66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e/userdata","rootfs":"/var/lib/containers/storage/overlay/c744810c8a09ccc54eaf6b538b13405ff75025ea0fcdf7c4f79b45507c315ea4/merged","created":"2021-07-08T23:43:28.74704463Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a5e28f4f","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a5e28f4f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.458654206Z","io.kubernetes.cri-o.Image":"ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.2","io.kubernetes.cri-o.ImageRef":"ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"636f853856e082c029b85fb89a036300\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210708233938-257783_636f853856e082c029b85fb89a036300/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c744810c8a09ccc54eaf6b538b13405ff75025ea0fcdf7c4f79b45507c315ea4/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin"
:"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/636f853856e082c029b85fb89a036300/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/636f853856e082c029b85fb89a036300/containers/kube-scheduler/6f04df63\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"636f853856e082c029b85fb89a036300","kubernetes.io/config.hash":"636f853856e082c029b85fb89a036300","kubernetes.io/config.seen":"2021-07-08T23:43:23.463757039Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStop
USec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41","pid":1601,"status":"running","bundle":"/run/containers/storage/overlay-containers/76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41/userdata","rootfs":"/var/lib/containers/storage/overlay/01ae8557075556025556e28b3617bfe934a965557cd8fd4d435456c30b0c4d27/merged","created":"2021-07-08T23:43:28.33827029Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"364fba0d","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"364fba0d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",
\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.136323454Z","io.kubernetes.cri-o.Image":"05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2349193ca86d9558bc895849265d2bbd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210708233938-257783_2349193ca86d9558bc895849265d2bbd/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overla
y/01ae8557075556025556e28b3617bfe934a965557cd8fd4d435456c30b0c4d27/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2349193ca86d9558bc895849265d2bbd/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2349193ca86d9558bc895849265d2bbd/containe
rs/etcd/486736f1\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2349193ca86d9558bc895849265d2bbd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2349193ca86d9558bc895849265d2bbd","kubernetes.io/config.seen":"2021-07-08T23:43:23.463758229Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","pid":2476,"status":"running","bundle":"/run/containers/storage/overlay-co
ntainers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata","rootfs":"/var/lib/containers/storage/overlay/8e2d756f3e3d21bd67765fd6dc79466722b3777db9c24dd3c63a849026ee706e/merged","created":"2021-07-08T23:43:58.920322142Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:58.220124726Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:58.842364249Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":
"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kindnet-589hd","io.kubernetes.cri-o.Labels":"{\"app\":\"kindnet\",\"pod-template-generation\":\"1\",\"controller-revision-hash\":\"694b6fb659\",\"io.kubernetes.pod.uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kindnet-589hd\",\"tier\":\"node\",\"k8s-app\":\"kindnet\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-589hd_55f424f0-d7a4-418f-8572-27041384f3ba/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-589hd\",\"uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8e2d756f3e3d21bd67765fd6dc7946672
2b3777db9c24dd3c63a849026ee706e/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/shm","io.kubernetes.pod.name":"kindnet-589hd","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"55f424f0-d7a4-418f-8572-27041384f3ba","k8s-app":"kindnet","ku
bernetes.io/config.seen":"2021-07-08T23:43:58.220124726Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a","pid":2554,"status":"running","bundle":"/run/containers/storage/overlay-containers/7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a/userdata","rootfs":"/var/lib/containers/storage/overlay/462c6688ce1d023d5df1b74afd144759f5b176d71761f6bc62065141ab582bf5/merged","created":"2021-07-08T23:43:59.140412019Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"73cb1b1","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"73cb1b1\",\"io.kub
ernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:59.029318377Z","io.kubernetes.cri-o.Image":"d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.2","io.kubernetes.cri-o.ImageRef":"d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-rb2ws\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-rb2ws_06346e
2c-5d4d-4e26-9d87-bfe3d4715985/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/462c6688ce1d023d5df1b74afd144759f5b176d71761f6bc62065141ab582bf5/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"con
tainer_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/containers/kube-proxy/343cc99a\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/volumes/kubernetes.io~projected/kube-api-access-2vk7z\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-rb2ws","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"06346e2c-5d4d-4e26-9d87-bfe3d4715985","kubernetes.io/config.se
en":"2021-07-08T23:43:58.246007990Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","pid":1533,"status":"running","bundle":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata","rootfs":"/var/lib/containers/storage/overlay/72dc08e222ab55d43eaea1871cbcf2481a5b6ed4398bc531f5b83c9c2bf82abc/merged","created":"2021-07-08T23:43:27.9820761Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"2349193ca86d9558bc895849265d2bbd\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.58.2:2379\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463758229Z\",\"kubernetes.io/config.source\":\"file\"}","io.kub
ernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.685254972Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"etcd-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"2349193ca86d9558bc895849265d2bbd\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210708233938-257783\",\"io.kubernetes.c
ontainer.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210708233938-257783_2349193ca86d9558bc895849265d2bbd/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210708233938-257783\",\"uid\":\"2349193ca86d9558bc895849265d2bbd\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72dc08e222ab55d43eaea1871cbcf2481a5b6ed4398bc531f5b83c9c2bf82abc/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeH
andler":"","io.kubernetes.cri-o.SandboxID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2349193ca86d9558bc895849265d2bbd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2349193ca86d9558bc895849265d2bbd","kubernetes.io/config.seen":"2021-07-08T23:43:23.463758229Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","pid":1526,"status":"running","bundle":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd0
02fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata","rootfs":"/var/lib/containers/storage/overlay/f9500faec4678f8eedd7e562c4634c3983eea5a8367363ee2114993ba2617eb9/merged","created":"2021-07-08T23:43:28.02245442Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"636f853856e082c029b85fb89a036300\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463757039Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.680724389Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true"
,"io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"636f853856e082c029b85fb89a036300\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210708233938-257783\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210708233938-257783_636f853856e082c029b85fb89a036300/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210708233938-257783\",\"uid\":\"636f853856e082c029b85fb89a036300\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/
containers/storage/overlay/f9500faec4678f8eedd7e562c4634c3983eea5a8367363ee2114993ba2617eb9/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.p
od.namespace":"kube-system","io.kubernetes.pod.uid":"636f853856e082c029b85fb89a036300","kubernetes.io/config.hash":"636f853856e082c029b85fb89a036300","kubernetes.io/config.seen":"2021-07-08T23:43:23.463757039Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e","pid":2536,"status":"running","bundle":"/run/containers/storage/overlay-containers/aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e/userdata","rootfs":"/var/lib/containers/storage/overlay/196f295295a6ebd45ec80ca7af0769b45f724efb7c52e5a54faf0894d74b8486/merged","created":"2021-07-08T23:43:59.094903496Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"42880ebe","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernet
es.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"42880ebe\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:59.000934542Z","io.kubernetes.cri-o.Image":"f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-589hd\",\"io.kubernetes.pod.namespace\":\"kube-system\"
,\"io.kubernetes.pod.uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-589hd_55f424f0-d7a4-418f-8572-27041384f3ba/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/196f295295a6ebd45ec80ca7af0769b45f724efb7c52e5a54faf0894d74b8486/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":
"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/containers/kindnet-cni/63efdea9\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/volumes/kubernetes.io~projected/kube-api-access-vxfqs\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-589hd","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"55f424f0-d7
a4-418f-8572-27041384f3ba","kubernetes.io/config.seen":"2021-07-08T23:43:58.220124726Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7","pid":3117,"status":"running","bundle":"/run/containers/storage/overlay-containers/b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7/userdata","rootfs":"/var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged","created":"2021-07-08T23:44:44.929527419Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3ba99b8a","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"T
CP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3ba99b8a\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:44:44.869981068Z","io.kubernetes.cr
i-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-mnwpk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-mnwpk_cd8ce294-9dba-4d2e-8793-cc0862414323/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.ResolvPath":"/run/co
ntainers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/containers/coredns/ebcb451b\",\"readonly\":false},{\"contai
ner_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/volumes/kubernetes.io~projected/kube-api-access-wjk4b\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-mnwpk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cd8ce294-9dba-4d2e-8793-cc0862414323","kubernetes.io/config.seen":"2021-07-08T23:44:44.378304571Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","pid":3088,"status":"running","bundle":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata","rootfs":"/var/lib/containers/storage/overlay/7c390972a7ebbdd53365500f2760439b1c797f16f323006acdd93709af97278c/merged",
"created":"2021-07-08T23:44:44.819414214Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-07-08T23:44:44.378304571Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"vethad721594\",\"mac\":\"fa:ff:ad:c6:25:66\"},{\"name\":\"eth0\",\"mac\":\"22:75:6a:ff:8f:5c\",\"sandbox\":\"/var/run/netns/ec4d5ca9-9e24-41d3-8013-97d3a7a811bd\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T2
3:44:44.69371705Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-mnwpk","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-mnwpk","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-mnwpk\",\"pod-template-hash\":\"558bd4d5db\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-mnwpk_cd8ce294-9dba-4d2e-8793-cc0862414323/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-mnwpk\",\"uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\",\"namespace\":\
"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7c390972a7ebbdd53365500f2760439b1c797f16f323006acdd93709af97278c/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-mnwpk","
io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cd8ce294-9dba-4d2e-8793-cc0862414323","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-07-08T23:44:44.378304571Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","pid":1483,"status":"running","bundle":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata","rootfs":"/var/lib/containers/storage/overlay/efa90599834890f8bd3da27a3a749f188e95537431bf95c2fbbda75a1a376820/merged","created":"2021-07-08T23:43:27.88584733Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463755710Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes
.io/config.hash\":\"c0a79d1d801cddeaa32444663181957f\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.588501479Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"c0a79d1d801cddeaa32444663181957f\",\"io.kubernetes.container.name\":\"
POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210708233938-257783\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210708233938-257783_c0a79d1d801cddeaa32444663181957f/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210708233938-257783\",\"uid\":\"c0a79d1d801cddeaa32444663181957f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efa90599834890f8bd3da27a3a749f188e95537431bf95c2fbbda75a1a376820/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"tr
ue","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.hash":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.seen":"2021-07-08T23:43:23.463755710Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"edb6f1460db485be501f94018d5caf7a
576fdd2e67b51c15322cf821191a0ebb","pid":2500,"status":"running","bundle":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata","rootfs":"/var/lib/containers/storage/overlay/621192df224b4f253243649a866cb69454571da103b6d3f3b1234d53c88440fd/merged","created":"2021-07-08T23:43:58.96207639Z","annotations":{"controller-revision-hash":"6896ccdc5","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:58.246007990Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:58.878275037Z","io.kubernetes.cri-o.HostName":"pause-202107
08233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-proxy-rb2ws","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-rb2ws\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"6896ccdc5\",\"pod-template-generation\":\"1\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-rb2ws_06346e2c-5d4d-4e26-9d87-bfe3d4715985/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-rb2ws\",\"uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"
/var/lib/containers/storage/overlay/621192df224b4f253243649a866cb69454571da103b6d3f3b1234d53c88440fd/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/shm","io.kubernetes.pod.name":"kube-proxy-rb2ws","io.kubernetes.pod.namespace":"kube-system","io.kuberne
tes.pod.uid":"06346e2c-5d4d-4e26-9d87-bfe3d4715985","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-07-08T23:43:58.246007990Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c","pid":1608,"status":"running","bundle":"/run/containers/storage/overlay-containers/f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c/userdata","rootfs":"/var/lib/containers/storage/overlay/acf62325b91e31207f04e3f39616a0820b0809fa7e55c2b2ce5eaf30b7367ddc/merged","created":"2021-07-08T23:43:28.292409803Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"44b38584","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o
.Annotations":"{\"io.kubernetes.container.hash\":\"44b38584\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.165591981Z","io.kubernetes.cri-o.Image":"2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.2","io.kubernetes.cri-o.ImageRef":"2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"48a917795140826e0
af6da63b039926b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210708233938-257783_48a917795140826e0af6da63b039926b/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/acf62325b91e31207f04e3f39616a0820b0809fa7e55c2b2ce5eaf30b7367ddc/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":
"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/48a917795140826e0af6da63b039926b/containers/kube-apiserver/141310e0\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/48a917795140826e0af6da63b039926b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.pod.nam
espace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"48a917795140826e0af6da63b039926b","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48a917795140826e0af6da63b039926b","kubernetes.io/config.seen":"2021-07-08T23:43:23.463729331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0708 23:44:50.889436  383102 cri.go:113] list returned 14 containers
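For reference, the crio list output above is a single JSON array; each element carries the OCI runtime state (`id`, `pid`, `status`, `bundle`, `rootfs`, `created`) plus the CRI-O/Kubernetes annotations that minikube reads. A minimal Go sketch of that shape — type and field names are illustrative, not minikube's:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// listedContainer models one entry of the JSON array dumped above. Only
// fields that actually appear in the dump are included.
type listedContainer struct {
	OCIVersion  string            `json:"ociVersion"`
	ID          string            `json:"id"`
	PID         int               `json:"pid"`
	Status      string            `json:"status"` // "running" for all 14 here
	Bundle      string            `json:"bundle"`
	Rootfs      string            `json:"rootfs"`
	Created     time.Time         `json:"created"` // RFC3339Nano timestamps
	Annotations map[string]string `json:"annotations"`
	Owner       string            `json:"owner"`
}

func main() {
	// Tiny stand-in payload ("abc123" is fake); the real dump above is the
	// same shape with 14 entries.
	raw := []byte(`[{"ociVersion":"1.0.2-dev","id":"abc123","pid":1454,"status":"running","created":"2021-07-08T23:43:27.761173536Z","annotations":{"io.container.manager":"cri-o"},"owner":"root"}]`)
	var containers []listedContainer
	if err := json.Unmarshal(raw, &containers); err != nil {
		panic(err)
	}
	fmt.Println(len(containers), containers[0].Status, containers[0].Created)
}
```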
	I0708 23:44:50.889463  383102 cri.go:116] container: {ID:0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef Status:running}
	I0708 23:44:50.889494  383102 cri.go:122] skipping {0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef running}: state = "running", want "paused"
	I0708 23:44:50.889513  383102 cri.go:116] container: {ID:153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4 Status:running}
	I0708 23:44:50.889538  383102 cri.go:118] skipping 153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4 - not in ps
	I0708 23:44:50.889556  383102 cri.go:116] container: {ID:66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e Status:running}
	I0708 23:44:50.889571  383102 cri.go:122] skipping {66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e running}: state = "running", want "paused"
	I0708 23:44:50.889587  383102 cri.go:116] container: {ID:76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41 Status:running}
	I0708 23:44:50.889601  383102 cri.go:122] skipping {76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41 running}: state = "running", want "paused"
	I0708 23:44:50.889626  383102 cri.go:116] container: {ID:79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2 Status:running}
	I0708 23:44:50.889644  383102 cri.go:118] skipping 79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2 - not in ps
	I0708 23:44:50.889657  383102 cri.go:116] container: {ID:7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a Status:running}
	I0708 23:44:50.889671  383102 cri.go:122] skipping {7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a running}: state = "running", want "paused"
	I0708 23:44:50.889687  383102 cri.go:116] container: {ID:7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3 Status:running}
	I0708 23:44:50.889711  383102 cri.go:118] skipping 7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3 - not in ps
	I0708 23:44:50.889726  383102 cri.go:116] container: {ID:98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957 Status:running}
	I0708 23:44:50.889739  383102 cri.go:118] skipping 98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957 - not in ps
	I0708 23:44:50.889751  383102 cri.go:116] container: {ID:aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e Status:running}
	I0708 23:44:50.889763  383102 cri.go:122] skipping {aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e running}: state = "running", want "paused"
	I0708 23:44:50.889786  383102 cri.go:116] container: {ID:b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 Status:running}
	I0708 23:44:50.889802  383102 cri.go:122] skipping {b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 running}: state = "running", want "paused"
	I0708 23:44:50.889816  383102 cri.go:116] container: {ID:ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f Status:running}
	I0708 23:44:50.889831  383102 cri.go:118] skipping ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f - not in ps
	I0708 23:44:50.889844  383102 cri.go:116] container: {ID:ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe Status:running}
	I0708 23:44:50.889868  383102 cri.go:118] skipping ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe - not in ps
	I0708 23:44:50.889884  383102 cri.go:116] container: {ID:edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb Status:running}
	I0708 23:44:50.889899  383102 cri.go:118] skipping edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb - not in ps
	I0708 23:44:50.889910  383102 cri.go:116] container: {ID:f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c Status:running}
	I0708 23:44:50.889924  383102 cri.go:122] skipping {f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c running}: state = "running", want "paused"
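The cri.go:116-122 decisions above boil down to two filters: drop any ID that did not appear in `crictl ps` (the POD/sandbox containers), and drop any container whose state differs from the wanted one — here everything is "running" while "paused" is wanted, so nothing survives. A hypothetical reconstruction of that loop, not minikube's actual source:

```go
package main

import "fmt"

type ctr struct{ ID, Status string }

// filterByState mirrors the cri.go:116-122 decisions above: skip IDs that
// never showed up in `crictl ps`, then skip any container not already in
// the wanted state.
func filterByState(all []ctr, inPS map[string]bool, want string) []ctr {
	var keep []ctr
	for _, c := range all {
		if !inPS[c.ID] {
			continue // "skipping <id> - not in ps"
		}
		if c.Status != want {
			continue // `state = "running", want "paused"`
		}
		keep = append(keep, c)
	}
	return keep
}

func main() {
	all := []ctr{{"aaa", "running"}, {"bbb", "running"}} // fake IDs
	// Prints [] — consistent with the log, where all 14 were skipped.
	fmt.Println(filterByState(all, map[string]bool{"aaa": true}, "paused"))
}
```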
	I0708 23:44:50.889976  383102 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 23:44:50.896457  383102 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0708 23:44:50.896471  383102 kubeadm.go:600] restartCluster start
	I0708 23:44:50.896504  383102 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0708 23:44:50.901607  383102 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 23:44:50.902345  383102 kubeconfig.go:93] found "pause-20210708233938-257783" server: "https://192.168.58.2:8443"
	I0708 23:44:50.902810  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
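The kapi.go:59 dump above is a client-go rest.Config: the host is the apiserver endpoint, and authentication is mutual TLS via the profile's client cert/key plus the cluster CA. Building the equivalent config by hand would look roughly like this (a sketch, with the long Jenkins profile paths abbreviated):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClient assembles the same kind of config shown in the kapi.go:59 dump.
// Paths are abbreviated stand-ins for the Jenkins integration tree.
func newClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://192.168.58.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: ".minikube/profiles/pause-20210708233938-257783/client.crt",
			KeyFile:  ".minikube/profiles/pause-20210708233938-257783/client.key",
			CAFile:   ".minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newClient(); err != nil {
		panic(err)
	}
}
```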
	I0708 23:44:50.904266  383102 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 23:44:50.910038  383102 api_server.go:164] Checking apiserver status ...
	I0708 23:44:50.910093  383102 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:44:50.921551  383102 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1608/cgroup
	I0708 23:44:50.927266  383102 api_server.go:180] apiserver freezer: "11:freezer:/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/system.slice/crio-f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c.scope"
	I0708 23:44:50.927324  383102 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/system.slice/crio-f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c.scope/freezer.state
	I0708 23:44:50.932380  383102 api_server.go:202] freezer state: "THAWED"
	I0708 23:44:50.932400  383102 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0708 23:44:50.940647  383102 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
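The apiserver check above runs in three steps: find the apiserver PID with `pgrep`, resolve its freezer cgroup from /proc/<pid>/cgroup, confirm the cgroup reads THAWED (i.e. not frozen by `minikube pause`), then probe /healthz. A condensed Go sketch of the last two steps, using a skip-verify TLS client in place of minikube's real certificates:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// apiserverHealthy re-traces the probe above: the freezer cgroup must read
// THAWED, then /healthz must answer 200. InsecureSkipVerify stands in for
// the client certs minikube actually uses — demo only.
func apiserverHealthy(freezerStatePath, healthzURL string) error {
	state, err := os.ReadFile(freezerStatePath)
	if err != nil {
		return err
	}
	if s := strings.TrimSpace(string(state)); s != "THAWED" {
		return fmt.Errorf("freezer state %q, want THAWED", s)
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(healthzURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // the run above got 200 "ok"
}

func main() {
	err := apiserverHealthy(
		"/sys/fs/cgroup/freezer/.../freezer.state", // full path as in the log
		"https://192.168.58.2:8443/healthz",
	)
	fmt.Println(err)
}
```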
	I0708 23:44:50.968340  383102 system_pods.go:86] 7 kube-system pods found
	I0708 23:44:50.968365  383102 system_pods.go:89] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:50.968372  383102 system_pods.go:89] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:50.968381  383102 system_pods.go:89] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:50.968389  383102 system_pods.go:89] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:50.968394  383102 system_pods.go:89] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:50.968404  383102 system_pods.go:89] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:50.968409  383102 system_pods.go:89] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
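The system_pods.go block above is the standard client-go pattern: list everything in kube-system and require each pod to report Running. A minimal sketch, assuming a kubeconfig path of your own:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative; the run above used the Jenkins
	// integration tree's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// The check above requires every pod to be Running.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning {
			fmt.Println("  -> not Running yet")
		}
	}
}
```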
	I0708 23:44:50.969071  383102 api_server.go:139] control plane version: v1.21.2
	I0708 23:44:50.969091  383102 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.58.2
	I0708 23:44:50.969100  383102 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0708 23:44:50.969105  383102 kubeadm.go:604] restartCluster took 72.629672ms
	I0708 23:44:50.969114  383102 kubeadm.go:392] StartCluster complete in 143.022344ms
	I0708 23:44:50.969124  383102 settings.go:142] acquiring lock: {Name:mkd7e81a263e91a8570dc867d9c6f95db0e3f272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:44:50.969188  383102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:44:50.969783  383102 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig: {Name:mk7ece99e42242db0c85d6c11531cc9d1c12a34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:44:50.970369  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
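
The rest.Config dumped above is what minikube's kapi helper builds from the kubeconfig it just rewrote. A hedged sketch of constructing an equivalent client with client-go; the kubeconfig path below is a placeholder, not the runner's actual path:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config like the one logged above from an on-disk kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
```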
	I0708 23:44:50.973359  383102 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210708233938-257783" rescaled to 1
	I0708 23:44:50.973409  383102 start.go:220] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0708 23:44:50.977036  383102 out.go:165] * Verifying Kubernetes components...
	I0708 23:44:50.977080  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:50.973644  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 23:44:50.973655  383102 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0708 23:44:50.977189  383102 addons.go:59] Setting storage-provisioner=true in profile "pause-20210708233938-257783"
	I0708 23:44:50.977229  383102 addons.go:135] Setting addon storage-provisioner=true in "pause-20210708233938-257783"
	W0708 23:44:50.977246  383102 addons.go:147] addon storage-provisioner should already be in state true
	I0708 23:44:50.977293  383102 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:50.977346  383102 addons.go:59] Setting default-storageclass=true in profile "pause-20210708233938-257783"
	I0708 23:44:50.977366  383102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210708233938-257783"
	I0708 23:44:50.977642  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:50.977846  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:51.040750  383102 out.go:165]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 23:44:51.040845  383102 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:44:51.040854  383102 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 23:44:51.040902  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:51.059995  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:51.063879  383102 addons.go:135] Setting addon default-storageclass=true in "pause-20210708233938-257783"
	W0708 23:44:51.063911  383102 addons.go:147] addon default-storageclass should already be in state true
	I0708 23:44:51.063955  383102 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:51.064454  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:51.120151  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:51.133089  383102 start.go:710] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0708 23:44:51.133129  383102 node_ready.go:35] waiting up to 6m0s for node "pause-20210708233938-257783" to be "Ready" ...
	I0708 23:44:51.144796  383102 node_ready.go:49] node "pause-20210708233938-257783" has status "Ready":"True"
	I0708 23:44:51.144810  383102 node_ready.go:38] duration metric: took 11.663188ms waiting for node "pause-20210708233938-257783" to be "Ready" ...
	I0708 23:44:51.144817  383102 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 23:44:51.151821  383102 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 23:44:51.151836  383102 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 23:44:51.151881  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:51.162008  383102 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.178412  383102 pod_ready.go:92] pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.178425  383102 pod_ready.go:81] duration metric: took 16.393726ms waiting for pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.178434  383102 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.182215  383102 pod_ready.go:92] pod "etcd-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.182231  383102 pod_ready.go:81] duration metric: took 3.790081ms waiting for pod "etcd-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.182242  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.185941  383102 pod_ready.go:92] pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.185957  383102 pod_ready.go:81] duration metric: took 3.703058ms waiting for pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.185966  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.193311  383102 pod_ready.go:92] pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.193326  383102 pod_ready.go:81] duration metric: took 7.350387ms waiting for pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.193335  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb2ws" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.199623  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:51.228409  383102 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:44:51.289804  383102 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 23:44:51.544987  383102 pod_ready.go:92] pod "kube-proxy-rb2ws" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.545034  383102 pod_ready.go:81] duration metric: took 351.691462ms waiting for pod "kube-proxy-rb2ws" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.545056  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.611304  383102 out.go:165] * Enabled addons: storage-provisioner, default-storageclass
	I0708 23:44:51.611327  383102 addons.go:344] enableAddons completed in 637.673923ms
	I0708 23:44:51.944191  383102 pod_ready.go:92] pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.944240  383102 pod_ready.go:81] duration metric: took 399.15943ms waiting for pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.944260  383102 pod_ready.go:38] duration metric: took 799.430802ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
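
The pod_ready.go waits above poll each system-critical pod until its PodReady condition reports True. A minimal client-go sketch of that loop, assuming a placeholder kubeconfig path and reusing the coredns pod name from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True,
// which is what the pod_ready.go waits above are checking.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Wait up to 6m0s for the pod to be Ready, mirroring the log's timeout.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-558bd4d5db-mnwpk", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient errors
		}
		return podReady(p), nil
	})
	fmt.Println("wait result:", err)
}
```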
	I0708 23:44:51.944284  383102 api_server.go:50] waiting for apiserver process to appear ...
	I0708 23:44:51.944353  383102 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:44:51.962521  383102 api_server.go:70] duration metric: took 989.086682ms to wait for apiserver process to appear ...
	I0708 23:44:51.962540  383102 api_server.go:86] waiting for apiserver healthz status ...
	I0708 23:44:51.962549  383102 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0708 23:44:51.976017  383102 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0708 23:44:51.976872  383102 api_server.go:139] control plane version: v1.21.2
	I0708 23:44:51.976889  383102 api_server.go:129] duration metric: took 14.342835ms to wait for apiserver health ...
	I0708 23:44:51.976896  383102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 23:44:52.147101  383102 system_pods.go:59] 8 kube-system pods found
	I0708 23:44:52.147126  383102 system_pods.go:61] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:52.147132  383102 system_pods.go:61] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:52.147156  383102 system_pods.go:61] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:52.147170  383102 system_pods.go:61] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:52.147175  383102 system_pods.go:61] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:52.147180  383102 system_pods.go:61] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:52.147188  383102 system_pods.go:61] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:52.147196  383102 system_pods.go:61] "storage-provisioner" [939f2223-21e0-4e8d-8f43-fd8f9cc992b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 23:44:52.147205  383102 system_pods.go:74] duration metric: took 170.300522ms to wait for pod list to return data ...
	I0708 23:44:52.147214  383102 default_sa.go:34] waiting for default service account to be created ...
	I0708 23:44:52.344080  383102 default_sa.go:45] found service account: "default"
	I0708 23:44:52.344097  383102 default_sa.go:55] duration metric: took 196.867452ms for default service account to be created ...
	I0708 23:44:52.344104  383102 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 23:44:52.546575  383102 system_pods.go:86] 8 kube-system pods found
	I0708 23:44:52.546597  383102 system_pods.go:89] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:52.546603  383102 system_pods.go:89] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:52.546608  383102 system_pods.go:89] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:52.546614  383102 system_pods.go:89] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:52.546619  383102 system_pods.go:89] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:52.546624  383102 system_pods.go:89] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:52.546629  383102 system_pods.go:89] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:52.546638  383102 system_pods.go:89] "storage-provisioner" [939f2223-21e0-4e8d-8f43-fd8f9cc992b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 23:44:52.546644  383102 system_pods.go:126] duration metric: took 202.535502ms to wait for k8s-apps to be running ...
	I0708 23:44:52.546651  383102 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 23:44:52.546691  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:52.554858  383102 system_svc.go:56] duration metric: took 8.204667ms WaitForService to wait for kubelet.
	I0708 23:44:52.554876  383102 kubeadm.go:547] duration metric: took 1.581445531s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0708 23:44:52.554910  383102 node_conditions.go:102] verifying NodePressure condition ...
	I0708 23:44:52.744446  383102 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0708 23:44:52.744473  383102 node_conditions.go:123] node cpu capacity is 2
	I0708 23:44:52.744486  383102 node_conditions.go:105] duration metric: took 189.57062ms to run NodePressure ...
	I0708 23:44:52.744495  383102 start.go:225] waiting for startup goroutines ...
	I0708 23:44:52.795296  383102 start.go:462] kubectl: 1.21.2, cluster: 1.21.2 (minor skew: 0)
	I0708 23:44:52.798688  383102 out.go:165] * Done! kubectl is now configured to use "pause-20210708233938-257783" cluster and "default" namespace by default
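
Just before "Done!", start.go logs `kubectl: 1.21.2, cluster: 1.21.2 (minor skew: 0)`. A small sketch of how such a minor-skew figure can be computed; minikube's actual parsing may differ:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor version from a string like "1.21.2" or "v1.21.2".
func minorOf(v string) (int, error) {
	v = strings.TrimPrefix(v, "v")
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unparseable version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	client, cluster := "1.21.2", "1.21.2" // values from the log above
	cm, _ := minorOf(client)
	sm, _ := minorOf(cluster)
	skew := cm - sm
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	if skew > 1 {
		fmt.Println("warning: kubectl/cluster minor version skew exceeds the supported +/-1")
	}
}
```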
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Thu 2021-07-08 23:42:57 UTC, end at Thu 2021-07-08 23:45:05 UTC. --
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.752367414Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-mnwpk Namespace:kube-system ID:ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f NetNS:/var/run/netns/ec4d5ca9-9e24-41d3-8013-97d3a7a811bd Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.752537333Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.851068596Z" level=info msg="Ran pod sandbox ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f with infra container: kube-system/coredns-558bd4d5db-mnwpk/POD" id=c9132cb2-089f-4563-8891-94bd70e68b31 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.851819090Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.0" id=535e481f-c9b5-4da0-888e-28da677e78c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.852397777Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:919b800fed6eaf6c9a55c3017c0aa3187bfe5d81abefbe49bb27f968458b94cc k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:39402464,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=535e481f-c9b5-4da0-888e-28da677e78c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.855119319Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.0" id=c226c346-9d15-4dc7-8640-d22668769349 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.855626237Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:919b800fed6eaf6c9a55c3017c0aa3187bfe5d81abefbe49bb27f968458b94cc k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:39402464,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c226c346-9d15-4dc7-8640-d22668769349 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.856387143Z" level=info msg="Creating container: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=9ffe0ac0-8fdf-4cab-92cd-d96e15acb1f8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.870099418Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged/etc/passwd: no such file or directory"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.870133896Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged/etc/group: no such file or directory"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.944318612Z" level=info msg="Created container b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=9ffe0ac0-8fdf-4cab-92cd-d96e15acb1f8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.944919240Z" level=info msg="Starting container: b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7" id=99de0758-72ab-4e9c-b175-4fef1b41793e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.955103274Z" level=info msg="Started container b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=99de0758-72ab-4e9c-b175-4fef1b41793e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:51 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:51.912211778Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=3fe405bf-c337-430c-ba8b-4acaabc95cf2 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.048464048Z" level=info msg="Ran pod sandbox 049e6b5335b3d37bd7b1f71f526dfb38a2146de747c5333e44dd562b58da320c with infra container: kube-system/storage-provisioner/POD" id=3fe405bf-c337-430c-ba8b-4acaabc95cf2 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.049231829Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b3e22441-f73c-48a3-b70b-8df95e9c6a80 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.049808794Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b3e22441-f73c-48a3-b70b-8df95e9c6a80 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.050512018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6d37e3ff-30d9-415f-ab55-83d772199ce8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.051005283Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6d37e3ff-30d9-415f-ab55-83d772199ce8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.051652721Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cbb3b7bb-00a6-411a-970c-153e4e488ad5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.065018823Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2db347e4f0d5d5b51e801807bc8894287c0f6d7b8ece1a922cadd38989584d2d/merged/etc/passwd: no such file or directory"
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.065122749Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2db347e4f0d5d5b51e801807bc8894287c0f6d7b8ece1a922cadd38989584d2d/merged/etc/group: no such file or directory"
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.130702118Z" level=info msg="Created container ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf: kube-system/storage-provisioner/storage-provisioner" id=cbb3b7bb-00a6-411a-970c-153e4e488ad5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.131445227Z" level=info msg="Starting container: ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf" id=5736abf9-7c6e-4d2d-99f4-1b9d9b3933f2 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.141586857Z" level=info msg="Started container ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf: kube-system/storage-provisioner/storage-provisioner" id=5736abf9-7c6e-4d2d-99f4-1b9d9b3933f2 name=/runtime.v1alpha2.RuntimeService/StartContainer
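
Each CRI-O line above is the daemon answering /runtime.v1alpha2.RuntimeService calls (RunPodSandbox, CreateContainer, StartContainer). A sketch of querying that same gRPC service directly over CRI-O's socket; the socket path matches the cri-socket annotation later in this log, and root access to the socket is assumed:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Dial the same socket the kubelet uses (from the cri-socket annotation).
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// List the sandboxes that RunPodSandbox created in the log above.
	pods, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Id, p.Metadata.Namespace+"/"+p.Metadata.Name)
	}
}
```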
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	ebc191d78d332       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   13 seconds ago       Running             storage-provisioner       0                   049e6b5335b3d
	b7d6404120fcb       1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8   20 seconds ago       Running             coredns                   0                   ded1c1360c407
	7ca432c9b0953       d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105   About a minute ago   Running             kube-proxy                0                   edb6f1460db48
	aa26d8524150c       f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301   About a minute ago   Running             kindnet-cni               0                   79814c347cb14
	0cb308b9b448f       9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630   About a minute ago   Running             kube-controller-manager   0                   ebf106620bd16
	66d5fee706a3d       ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4   About a minute ago   Running             kube-scheduler            0                   98331c8576b70
	76999b0177398       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28   About a minute ago   Running             etcd                      0                   7df3a2be1b33d
	f275fc53ae00f       2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0   About a minute ago   Running             kube-apiserver            0                   153d3d24ac6ae
	
	* 
	* ==> coredns [b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210708233938-257783
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-20210708233938-257783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5
	                    minikube.k8s.io/name=pause-20210708233938-257783
	                    minikube.k8s.io/updated_at=2021_07_08T23_43_45_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 08 Jul 2021 23:43:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210708233938-257783
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 08 Jul 2021 23:45:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:44:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    pause-20210708233938-257783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                06c382d0-5723-4c28-97d9-2bf95fc86b49
	  Boot ID:                    7cbe50af-3171-4d81-8fca-78216a04984f
	  Kernel Version:             5.8.0-1038-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.2
	  Kube-Proxy Version:         v1.21.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-mnwpk                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     67s
	  kube-system                 etcd-pause-20210708233938-257783                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         76s
	  kube-system                 kindnet-589hd                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      67s
	  kube-system                 kube-apiserver-pause-20210708233938-257783             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-pause-20210708233938-257783    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-rb2ws                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-pause-20210708233938-257783             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  98s (x8 over 99s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x7 over 99s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x7 over 99s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 77s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s                kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 66s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                26s                kubelet     Node pause-20210708233938-257783 status is now: NodeReady
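
The node description above is the equivalent of `kubectl describe node`. A short client-go sketch that fetches the same condition table programmatically; the kubeconfig path is a placeholder, the node name comes from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"pause-20210708233938-257783", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print the same Conditions table the describe output renders above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
}
```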
	
	* 
	* ==> dmesg <==
	* [  +0.000671] FS-Cache: O-key=[8] '77e60b0000000000'
	[  +0.000514] FS-Cache: N-cookie c=00000000e6b84f6b [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000917] FS-Cache: N-cookie d=0000000052778918 n=000000009967b9dc
	[  +0.000663] FS-Cache: N-key=[8] '77e60b0000000000'
	[  +0.001810] FS-Cache: Duplicate cookie detected
	[  +0.000530] FS-Cache: O-cookie c=0000000057c7fc1d [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=0000000052778918 n=00000000efae32c9
	[  +0.000673] FS-Cache: O-key=[8] '77e60b0000000000'
	[  +0.000542] FS-Cache: N-cookie c=00000000f56d3f5d [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000863] FS-Cache: N-cookie d=0000000052778918 n=00000000e997ef03
	[  +0.000702] FS-Cache: N-key=[8] '77e60b0000000000'
	[  +1.187985] FS-Cache: Duplicate cookie detected
	[  +0.000541] FS-Cache: O-cookie c=000000000ea7a21c [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.000903] FS-Cache: O-cookie d=0000000052778918 n=00000000f7f72a4b
	[  +0.000697] FS-Cache: O-key=[8] '76e60b0000000000'
	[  +0.000532] FS-Cache: N-cookie c=00000000dc14d28d [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000872] FS-Cache: N-cookie d=0000000052778918 n=00000000fd1ba8e6
	[  +0.000719] FS-Cache: N-key=[8] '76e60b0000000000'
	[  +0.299966] FS-Cache: Duplicate cookie detected
	[  +0.000563] FS-Cache: O-cookie c=00000000b39eb93d [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.000913] FS-Cache: O-cookie d=0000000052778918 n=00000000654c5f24
	[  +0.000696] FS-Cache: O-key=[8] '79e60b0000000000'
	[  +0.000542] FS-Cache: N-cookie c=000000004dd4c5bf [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=0000000052778918 n=000000008dfb704a
	[  +0.000684] FS-Cache: N-key=[8] '79e60b0000000000'
	
	* 
	* ==> etcd [76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41] <==
	* 2021-07-08 23:43:29.407011 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-07-08 23:43:29.407103 I | etcdserver/api: enabled capabilities for version 3.4
	2021-07-08 23:43:29.407157 I | etcdserver: published {Name:pause-20210708233938-257783 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-07-08 23:43:29.407465 I | embed: ready to serve client requests
	2021-07-08 23:43:29.415509 I | embed: serving client requests on 127.0.0.1:2379
	2021-07-08 23:43:29.423096 I | embed: ready to serve client requests
	2021-07-08 23:43:29.424420 I | embed: serving client requests on 192.168.58.2:2379
	2021-07-08 23:43:38.896326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:43:39.824755 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:heapster\" " with result "range_response_count:0 size:4" took too long (132.781472ms) to execute
	2021-07-08 23:43:40.062517 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (126.484476ms) to execute
	2021-07-08 23:43:40.062723 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:node-bootstrapper\" " with result "range_response_count:0 size:4" took too long (157.087895ms) to execute
	2021-07-08 23:43:41.406099 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:kube-scheduler\" " with result "range_response_count:0 size:5" took too long (106.875165ms) to execute
	2021-07-08 23:43:41.406344 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210708233938-257783\" " with result "range_response_count:1 size:5706" took too long (100.988866ms) to execute
	2021-07-08 23:43:41.800497 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler\" " with result "range_response_count:0 size:5" took too long (104.221848ms) to execute
	2021-07-08 23:43:42.415075 W | etcdserver: read-only range request "key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" " with result "range_response_count:0 size:5" took too long (113.749552ms) to execute
	2021-07-08 23:43:42.790083 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system:controller:cloud-provider\" " with result "range_response_count:0 size:5" took too long (139.951306ms) to execute
	2021-07-08 23:43:42.790797 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (104.526083ms) to execute
	2021-07-08 23:43:55.482711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:43:58.854149 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:08.855081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:18.853976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:28.854207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:38.854850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:48.854356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:58.855084 I | etcdserver/api/etcdhttp: /health OK (status code 200)
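
etcd logs `/health OK` roughly every ten seconds above, and earlier flags range requests that "took too long". A rough sketch of timing such a health probe; note it uses plain HTTP on 127.0.0.1:2379, whereas this etcd requires client TLS, so treat it purely as an illustration of the latency check:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the same /health endpoint the log shows and flag slow responses,
	// loosely mirroring etcd's own "took too long" warnings. The 100ms
	// threshold is an arbitrary choice for this sketch.
	const threshold = 100 * time.Millisecond
	start := time.Now()
	resp, err := http.Get("http://127.0.0.1:2379/health")
	if err != nil {
		fmt.Println("health probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	elapsed := time.Since(start)
	fmt.Printf("/health %s in %v: %s\n", resp.Status, elapsed, body)
	if elapsed > threshold {
		fmt.Printf("warning: health check took too long (%v > %v)\n", elapsed, threshold)
	}
}
```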
	
	* 
	* ==> kernel <==
	*  23:45:05 up  2:27,  0 users,  load average: 3.69, 2.87, 1.90
	Linux pause-20210708233938-257783 5.8.0-1038-aws #40~20.04.1-Ubuntu SMP Thu Jun 17 13:20:15 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c] <==
	* I0708 23:43:38.659980       1 cache.go:39] Caches are synced for autoregister controller
	I0708 23:43:38.660019       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0708 23:43:38.765817       1 controller.go:611] quota admission added evaluator for: namespaces
	I0708 23:43:39.399647       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0708 23:43:39.399669       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0708 23:43:39.413314       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0708 23:43:39.428902       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0708 23:43:39.428920       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0708 23:43:42.417829       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 23:43:42.615824       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0708 23:43:42.951365       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0708 23:43:42.952308       1 controller.go:611] quota admission added evaluator for: endpoints
	I0708 23:43:42.961082       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 23:43:44.101667       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0708 23:43:44.674905       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0708 23:43:44.719032       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0708 23:43:48.275800       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 23:43:58.042264       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0708 23:43:58.285534       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0708 23:44:04.479561       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:44:04.479599       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:44:04.479606       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:44:35.434880       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:44:35.434919       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:44:35.434927       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef] <==
	* I0708 23:43:57.880761       1 shared_informer.go:247] Caches are synced for HPA 
	I0708 23:43:57.880837       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0708 23:43:57.907676       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0708 23:43:57.908756       1 shared_informer.go:247] Caches are synced for endpoint 
	I0708 23:43:57.952724       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210708233938-257783" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0708 23:43:57.956872       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210708233938-257783" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0708 23:43:58.003743       1 shared_informer.go:247] Caches are synced for deployment 
	I0708 23:43:58.061826       1 shared_informer.go:247] Caches are synced for disruption 
	I0708 23:43:58.061841       1 disruption.go:371] Sending events to api server.
	I0708 23:43:58.113116       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0708 23:43:58.121312       1 shared_informer.go:247] Caches are synced for resource quota 
	I0708 23:43:58.138517       1 shared_informer.go:247] Caches are synced for resource quota 
	I0708 23:43:58.160698       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-589hd"
	I0708 23:43:58.238962       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rb2ws"
	I0708 23:43:58.288443       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	E0708 23:43:58.326235       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"85756639-7788-414f-aae2-a95c8ac59acd", ResourceVersion:"309", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761384625, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000d528a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000d528b8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001394920), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ki
ndnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d528d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FC
VolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d528e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolume
Source)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d52900), EmptyDir:(*v1.Emp
tyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVo
lume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001394940)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001394980)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40014b4240), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000f18168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a56700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400135e8f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000f181b0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0708 23:43:58.358158       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-xtvks"
	I0708 23:43:58.384252       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-mnwpk"
	I0708 23:43:58.532367       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0708 23:43:58.551775       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0708 23:43:58.551796       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0708 23:43:58.636207       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0708 23:43:58.654856       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-xtvks"
	I0708 23:44:42.867632       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a] <==
	* I0708 23:43:59.522352       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0708 23:43:59.522418       1 server_others.go:140] Detected node IP 192.168.58.2
	W0708 23:43:59.522436       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0708 23:43:59.592863       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0708 23:43:59.592891       1 server_others.go:212] Using iptables Proxier.
	I0708 23:43:59.592900       1 server_others.go:219] creating dualStackProxier for iptables.
	W0708 23:43:59.592910       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0708 23:43:59.593168       1 server.go:643] Version: v1.21.2
	I0708 23:43:59.593489       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0708 23:43:59.593530       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0708 23:43:59.594089       1 config.go:315] Starting service config controller
	I0708 23:43:59.594140       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0708 23:43:59.594778       1 config.go:224] Starting endpoint slice config controller
	I0708 23:43:59.594818       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0708 23:43:59.596985       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0708 23:43:59.598797       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0708 23:43:59.695058       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0708 23:43:59.695065       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e] <==
	* E0708 23:43:38.663259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 23:43:38.663881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:43:38.663980       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:43:38.664026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:43:38.664104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:43:38.664153       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:43:38.667689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:43:39.506225       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:43:39.684692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:43:39.707077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:39.715815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 23:43:39.739475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:43:39.927791       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:43:39.950708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:43:40.026534       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 23:43:40.052611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.106259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.125654       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.138747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:43:40.200954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 23:43:40.246523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0708 23:43:42.914398       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-07-08 23:42:57 UTC, end at Thu 2021-07-08 23:45:05 UTC. --
	Jul 08 23:44:09 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:09.035899    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:13 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:13.913137    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:18 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:18.914320    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:19 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:19.141178    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:23 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:23.915497    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:28 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:28.916031    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:29 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:29.195302    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:39 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:39.262592    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.378521    2084 topology_manager.go:187] "Topology Admit Handler"
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.439076    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd8ce294-9dba-4d2e-8793-cc0862414323-config-volume\") pod \"coredns-558bd4d5db-mnwpk\" (UID: \"cd8ce294-9dba-4d2e-8793-cc0862414323\") "
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.439122    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjk4b\" (UniqueName: \"kubernetes.io/projected/cd8ce294-9dba-4d2e-8793-cc0862414323-kube-api-access-wjk4b\") pod \"coredns-558bd4d5db-mnwpk\" (UID: \"cd8ce294-9dba-4d2e-8793-cc0862414323\") "
	Jul 08 23:44:49 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:49.318435    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:49 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:49.796926    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:50 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:50.049924    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:50 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:50.459815    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:51.106860    2084 container.go:586] Failed to update stats for container "/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7": /sys/fs/cgroup/cpuset/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/cpuset.cpus found to be empty, continuing to push stats
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:51.127776    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.610872    2084 topology_manager.go:187] "Topology Admit Handler"
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.679061    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ndmf\" (UniqueName: \"kubernetes.io/projected/939f2223-21e0-4e8d-8f43-fd8f9cc992b8-kube-api-access-6ndmf\") pod \"storage-provisioner\" (UID: \"939f2223-21e0-4e8d-8f43-fd8f9cc992b8\") "
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.679129    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/939f2223-21e0-4e8d-8f43-fd8f9cc992b8-tmp\") pod \"storage-provisioner\" (UID: \"939f2223-21e0-4e8d-8f43-fd8f9cc992b8\") "
	Jul 08 23:44:52 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:52.247269    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:53 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:53.995237    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:56 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:56.896369    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:59 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:59.371590    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:45:01 pause-20210708233938-257783 kubelet[2084]: W0708 23:45:01.193607    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	
	* 
	* ==> storage-provisioner [ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf] <==
	* I0708 23:44:52.156408       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 23:44:52.170055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 23:44:52.170092       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 23:44:52.181346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 23:44:52.181466       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409!
	I0708 23:44:52.181651       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"812885a7-6ecb-4200-9882-e4b3a6fd0939", APIVersion:"v1", ResourceVersion:"519", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409 became leader
	I0708 23:44:52.282548       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210708233938-257783 -n pause-20210708233938-257783
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210708233938-257783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context pause-20210708233938-257783 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context pause-20210708233938-257783 describe pod : exit status 1 (59.812723ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:275: kubectl --context pause-20210708233938-257783 describe pod : exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect pause-20210708233938-257783
helpers_test.go:236: (dbg) docker inspect pause-20210708233938-257783:

-- stdout --
	[
	    {
	        "Id": "9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7",
	        "Created": "2021-07-08T23:42:55.939971333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 374514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-07-08T23:42:56.671510562Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/hosts",
	        "LogPath": "/var/lib/docker/containers/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7-json.log",
	        "Name": "/pause-20210708233938-257783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210708233938-257783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210708233938-257783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa317
1af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/d
ocker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41
f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d93ef718048edae9ec46ba0287dcb3ecd1e18c62c76d1f1094e91d758cb392d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210708233938-257783",
	                "Source": "/var/lib/docker/volumes/pause-20210708233938-257783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210708233938-257783",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210708233938-257783",
	                "name.minikube.sigs.k8s.io": "pause-20210708233938-257783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3364fc967f3a3a4f088daf2fc73d5bc45f12bb4867ba695dabf0ca91254c0104",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49617"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49616"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49613"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49615"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49614"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3364fc967f3a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210708233938-257783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9e0e986f196e",
	                        "pause-20210708233938-257783"
	                    ],
	                    "NetworkID": "7afb1bbd4669bf981affda6e21a0542828c16cc07887274e53996cdbb87c5e05",
	                    "EndpointID": "cf78b06b889a67153f813b6dd94cd8e9e0adb49ff2586b7f7058289d1b323f20",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-20210708233938-257783 -n pause-20210708233938-257783
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p pause-20210708233938-257783 logs -n 25
helpers_test.go:253: TestPause/serial/PauseAgain logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                    Args                    |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:05 UTC | Thu, 08 Jul 2021 23:39:12 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	|         | --schedule 5s                              |                                            |         |         |                               |                               |
	| delete  | -p                                         | scheduled-stop-20210708233807-257783       | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:12 UTC | Thu, 08 Jul 2021 23:39:17 UTC |
	|         | scheduled-stop-20210708233807-257783       |                                            |         |         |                               |                               |
	| delete  | -p                                         | insufficient-storage-20210708233917-257783 | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:32 UTC | Thu, 08 Jul 2021 23:39:38 UTC |
	|         | insufficient-storage-20210708233917-257783 |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubenet-20210708233938-257783              | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:39:38 UTC |
	|         | kubenet-20210708233938-257783              |                                            |         |         |                               |                               |
	| delete  | -p                                         | flannel-20210708233938-257783              | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:39:39 UTC |
	|         | flannel-20210708233938-257783              |                                            |         |         |                               |                               |
	| delete  | -p false-20210708233939-257783             | false-20210708233939-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:39 UTC | Thu, 08 Jul 2021 23:39:40 UTC |
	| start   | -p                                         | force-systemd-env-20210708233940-257783    | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:40 UTC | Thu, 08 Jul 2021 23:40:42 UTC |
	|         | force-systemd-env-20210708233940-257783    |                                            |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr            |                                            |         |         |                               |                               |
	|         | -v=5 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-env-20210708233940-257783    | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:40:42 UTC | Thu, 08 Jul 2021 23:40:44 UTC |
	|         | force-systemd-env-20210708233940-257783    |                                            |         |         |                               |                               |
	| start   | -p                                         | force-systemd-flag-20210708234044-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:40:44 UTC | Thu, 08 Jul 2021 23:41:30 UTC |
	|         | force-systemd-flag-20210708234044-257783   |                                            |         |         |                               |                               |
	|         | --memory=2048 --force-systemd              |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=5 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | force-systemd-flag-20210708234044-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:41:30 UTC | Thu, 08 Jul 2021 23:41:33 UTC |
	|         | force-systemd-flag-20210708234044-257783   |                                            |         |         |                               |                               |
	| start   | -p                                         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:41:33 UTC | Thu, 08 Jul 2021 23:42:18 UTC |
	|         | cert-options-20210708234133-257783         |                                            |         |         |                               |                               |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                  |                                            |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15              |                                            |         |         |                               |                               |
	|         | --apiserver-names=localhost                |                                            |         |         |                               |                               |
	|         | --apiserver-names=www.google.com           |                                            |         |         |                               |                               |
	|         | --apiserver-port=8555                      |                                            |         |         |                               |                               |
	|         | --driver=docker                            |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| -p      | cert-options-20210708234133-257783         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:19 UTC | Thu, 08 Jul 2021 23:42:19 UTC |
	|         | ssh openssl x509 -text -noout -in          |                                            |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt      |                                            |         |         |                               |                               |
	| delete  | -p                                         | cert-options-20210708234133-257783         | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:19 UTC | Thu, 08 Jul 2021 23:42:22 UTC |
	|         | cert-options-20210708234133-257783         |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:42:22 UTC | Thu, 08 Jul 2021 23:43:17 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0               |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| stop    | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:43:17 UTC | Thu, 08 Jul 2021 23:43:20 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:43:20 UTC | Thu, 08 Jul 2021 23:44:07 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-beta.0        |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:07 UTC | Thu, 08 Jul 2021 23:44:29 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	|         | --memory=2200                              |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-beta.0        |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker     |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| delete  | -p                                         | kubernetes-upgrade-20210708234222-257783   | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:29 UTC | Thu, 08 Jul 2021 23:44:32 UTC |
	|         | kubernetes-upgrade-20210708234222-257783   |                                            |         |         |                               |                               |
	| start   | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:39:38 UTC | Thu, 08 Jul 2021 23:44:47 UTC |
	|         | --memory=2048                              |                                            |         |         |                               |                               |
	|         | --install-addons=false                     |                                            |         |         |                               |                               |
	|         | --wait=all --driver=docker                 |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| start   | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:47 UTC | Thu, 08 Jul 2021 23:44:52 UTC |
	|         | --alsologtostderr                          |                                            |         |         |                               |                               |
	|         | -v=1 --driver=docker                       |                                            |         |         |                               |                               |
	|         | --container-runtime=crio                   |                                            |         |         |                               |                               |
	| -p      | pause-20210708233938-257783                | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:55 UTC | Thu, 08 Jul 2021 23:44:56 UTC |
	|         | logs -n 25                                 |                                            |         |         |                               |                               |
	| -p      | pause-20210708233938-257783                | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:57 UTC | Thu, 08 Jul 2021 23:44:58 UTC |
	|         | logs -n 25                                 |                                            |         |         |                               |                               |
	| -p      | pause-20210708233938-257783                | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:44:59 UTC | Thu, 08 Jul 2021 23:45:01 UTC |
	|         | logs -n 25                                 |                                            |         |         |                               |                               |
	| unpause | -p pause-20210708233938-257783             | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:45:01 UTC | Thu, 08 Jul 2021 23:45:02 UTC |
	|         | --alsologtostderr -v=5                     |                                            |         |         |                               |                               |
	| -p      | pause-20210708233938-257783                | pause-20210708233938-257783                | jenkins | v1.22.0 | Thu, 08 Jul 2021 23:45:04 UTC | Thu, 08 Jul 2021 23:45:05 UTC |
	|         | logs -n 25                                 |                                            |         |         |                               |                               |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/07/08 23:44:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 23:44:47.154451  383102 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:44:47.154571  383102 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:47.154583  383102 out.go:299] Setting ErrFile to fd 2...
	I0708 23:44:47.154587  383102 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:44:47.154704  383102 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:44:47.154960  383102 out.go:293] Setting JSON to false
	I0708 23:44:47.156021  383102 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8836,"bootTime":1625779051,"procs":490,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:44:47.156093  383102 start.go:121] virtualization:  
	I0708 23:44:47.158605  383102 out.go:165] * [pause-20210708233938-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0708 23:44:47.160748  383102 out.go:165]   - MINIKUBE_LOCATION=11942
	I0708 23:44:47.162569  383102 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:44:47.164384  383102 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0708 23:44:47.166094  383102 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0708 23:44:47.166892  383102 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:44:47.221034  383102 docker.go:132] docker version: linux-20.10.7
	I0708 23:44:47.221102  383102 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:44:47.306208  383102 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:49 SystemTime:2021-07-08 23:44:47.254744355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:44:47.306303  383102 docker.go:244] overlay module found
	I0708 23:44:47.309309  383102 out.go:165] * Using the docker driver based on existing profile
	I0708 23:44:47.309327  383102 start.go:278] selected driver: docker
	I0708 23:44:47.309332  383102 start.go:751] validating driver "docker" against &{Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:47.309419  383102 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0708 23:44:47.309784  383102 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:44:47.393590  383102 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:49 SystemTime:2021-07-08 23:44:47.342522281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLi
cense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:44:47.393925  383102 cni.go:93] Creating CNI manager for ""
	I0708 23:44:47.393941  383102 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:44:47.393950  383102 start_flags.go:275] config:
	{Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Netw
orkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
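The profile is being revalidated against its saved config rather than created from scratch: the config above pins the docker driver with the crio runtime, which is the combination that makes the CNI manager recommend kindnet. A start invocation that would yield this combination looks roughly like the following (a sketch; the harness's exact flags are not recorded in this log):

	out/minikube-linux-arm64 start -p pause-20210708233938-257783 \
	  --driver=docker --container-runtime=crio --memory=2048 --cpus=2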
	I0708 23:44:47.396046  383102 out.go:165] * Starting control plane node pause-20210708233938-257783 in cluster pause-20210708233938-257783
	I0708 23:44:47.396084  383102 cache.go:117] Beginning downloading kic base image for docker with crio
	I0708 23:44:47.398019  383102 out.go:165] * Pulling base image ...
	I0708 23:44:47.398037  383102 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:44:47.398068  383102 preload.go:150] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4
	I0708 23:44:47.398080  383102 cache.go:56] Caching tarball of preloaded images
	I0708 23:44:47.398205  383102 preload.go:174] Found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0708 23:44:47.398227  383102 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.2 on crio
	I0708 23:44:47.398319  383102 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/config.json ...
	I0708 23:44:47.398483  383102 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0708 23:44:47.436290  383102 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0708 23:44:47.436316  383102 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0708 23:44:47.436330  383102 cache.go:205] Successfully downloaded all kic artifacts
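The pull is skipped because the kicbase image is already present under the pinned digest. To confirm what the local daemon holds, the image can be listed with digests (a sketch):

	docker images --digests gcr.io/k8s-minikube/kicbase
	# the DIGEST column should show sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79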
	I0708 23:44:47.436359  383102 start.go:313] acquiring machines lock for pause-20210708233938-257783: {Name:mk0dd574f5aab82d7e948dc25f56eae9437435ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0708 23:44:47.436434  383102 start.go:317] acquired machines lock for "pause-20210708233938-257783" in 54.777µs
	I0708 23:44:47.436455  383102 start.go:93] Skipping create...Using existing machine configuration
	I0708 23:44:47.436464  383102 fix.go:55] fixHost starting: 
	I0708 23:44:47.436724  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:47.471771  383102 fix.go:108] recreateIfNeeded on pause-20210708233938-257783: state=Running err=<nil>
	W0708 23:44:47.471801  383102 fix.go:134] unexpected machine state, will restart: <nil>
	I0708 23:44:47.474143  383102 out.go:165] * Updating the running docker "pause-20210708233938-257783" container ...
	I0708 23:44:47.474165  383102 machine.go:88] provisioning docker machine ...
	I0708 23:44:47.474179  383102 ubuntu.go:169] provisioning hostname "pause-20210708233938-257783"
	I0708 23:44:47.474233  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:47.518727  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:47.518901  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:47.518913  383102 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210708233938-257783 && echo "pause-20210708233938-257783" | sudo tee /etc/hostname
	I0708 23:44:47.662054  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210708233938-257783
	
	I0708 23:44:47.662122  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:47.698564  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:47.698719  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:47.698745  383102 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210708233938-257783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210708233938-257783/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210708233938-257783' | sudo tee -a /etc/hosts; 
				fi
			fi
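The script above keeps a 127.0.1.1 entry pointing at the machine's own hostname so local name lookups still succeed after the rename. The same patch in parameterized form (a sketch; NEW_HOSTNAME is a stand-in variable):

	NEW_HOSTNAME=pause-20210708233938-257783
	if ! grep -xq ".*\s${NEW_HOSTNAME}" /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    # rewrite the existing 127.0.1.1 line to carry the new hostname
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NEW_HOSTNAME}/g" /etc/hosts
	  else
	    # no 127.0.1.1 line yet: append one
	    echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
	  fi
	fi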
	I0708 23:44:47.806503  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0708 23:44:47.806520  383102 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube}
	I0708 23:44:47.806546  383102 ubuntu.go:177] setting up certificates
	I0708 23:44:47.806556  383102 provision.go:83] configureAuth start
	I0708 23:44:47.806605  383102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210708233938-257783
	I0708 23:44:47.841582  383102 provision.go:137] copyHostCerts
	I0708 23:44:47.841630  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem, removing ...
	I0708 23:44:47.841642  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem
	I0708 23:44:47.841700  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem (1078 bytes)
	I0708 23:44:47.841780  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem, removing ...
	I0708 23:44:47.841793  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem
	I0708 23:44:47.841816  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem (1123 bytes)
	I0708 23:44:47.841862  383102 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem, removing ...
	I0708 23:44:47.841871  383102 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem
	I0708 23:44:47.841892  383102 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem (1679 bytes)
	I0708 23:44:47.841933  383102 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem org=jenkins.pause-20210708233938-257783 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20210708233938-257783]
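provision.go generates the server key pair natively in Go; an openssl sequence producing a certificate with the same org and SANs would look roughly like this (a sketch with hypothetical file names, not the code minikube runs):

	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.pause-20210708233938-257783" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:pause-20210708233938-257783') \
	  -out server.pem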
	I0708 23:44:48.952877  383102 provision.go:171] copyRemoteCerts
	I0708 23:44:48.952938  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0708 23:44:48.952979  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:48.988956  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.069409  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0708 23:44:49.084030  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0708 23:44:49.098201  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0708 23:44:49.112707  383102 provision.go:86] duration metric: configureAuth took 1.306144285s
	I0708 23:44:49.112722  383102 ubuntu.go:193] setting minikube options for container-runtime
	I0708 23:44:49.112945  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.147842  383102 main.go:130] libmachine: Using SSH client type: native
	I0708 23:44:49.148030  383102 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49617 <nil> <nil>}
	I0708 23:44:49.148050  383102 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	I0708 23:44:49.265435  383102 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0708 23:44:49.265449  383102 machine.go:91] provisioned docker machine in 1.791277399s
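The tee above wrote the insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube inside the node container; a quick spot check from the host (a sketch):

	docker exec pause-20210708233938-257783 cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '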
	I0708 23:44:49.265466  383102 start.go:267] post-start starting for "pause-20210708233938-257783" (driver="docker")
	I0708 23:44:49.265473  383102 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0708 23:44:49.265521  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0708 23:44:49.265564  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.302440  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.385342  383102 ssh_runner.go:149] Run: cat /etc/os-release
	I0708 23:44:49.387501  383102 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0708 23:44:49.387521  383102 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0708 23:44:49.387533  383102 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0708 23:44:49.387542  383102 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0708 23:44:49.387552  383102 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/addons for local assets ...
	I0708 23:44:49.387592  383102 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/files for local assets ...
	I0708 23:44:49.387720  383102 start.go:270] post-start completed in 122.24664ms
	I0708 23:44:49.387753  383102 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 23:44:49.387787  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.422565  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.503288  383102 fix.go:57] fixHost completed within 2.066821667s
	I0708 23:44:49.503310  383102 start.go:80] releasing machines lock for "pause-20210708233938-257783", held for 2.066864546s
	I0708 23:44:49.503369  383102 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20210708233938-257783
	I0708 23:44:49.537513  383102 ssh_runner.go:149] Run: systemctl --version
	I0708 23:44:49.537553  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.537599  383102 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0708 23:44:49.537656  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:49.578213  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.591758  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:49.667104  383102 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0708 23:44:49.802373  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0708 23:44:49.809858  383102 docker.go:153] disabling docker service ...
	I0708 23:44:49.809898  383102 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0708 23:44:49.818109  383102 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0708 23:44:49.826668  383102 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0708 23:44:49.957409  383102 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0708 23:44:50.082177  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
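Only one container runtime may own the node, so docker.socket is disabled and docker.service masked before CRI-O is reconfigured. The end state can be verified inside the node with (a sketch):

	systemctl is-active docker.service   # expected: inactive
	systemctl is-enabled docker.socket   # expected: disabled
	systemctl is-enabled docker.service  # expected: masked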
	I0708 23:44:50.090087  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0708 23:44:50.100877  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0708 23:44:50.109868  383102 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0708 23:44:50.109919  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
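The two sed edits pin the sandbox image and the default CNI network in /etc/crio/crio.conf; after they run, both keys should read back as follows (a sketch):

	grep -E '^\s*(pause_image|cni_default_network)\s*=' /etc/crio/crio.conf
	# pause_image = "k8s.gcr.io/pause:3.4.1"
	# cni_default_network = "kindnet"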
	I0708 23:44:50.116503  383102 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0708 23:44:50.121833  383102 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0708 23:44:50.126949  383102 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0708 23:44:50.251265  383102 ssh_runner.go:149] Run: sudo systemctl start crio
	I0708 23:44:50.259385  383102 start.go:386] Will wait 60s for socket path /var/run/crio/crio.sock
	I0708 23:44:50.259425  383102 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0708 23:44:50.261926  383102 start.go:411] Will wait 60s for crictl version
	I0708 23:44:50.261961  383102 ssh_runner.go:149] Run: sudo crictl version
	I0708 23:44:50.286962  383102 start.go:420] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0708 23:44:50.287041  383102 ssh_runner.go:149] Run: crio --version
	I0708 23:44:50.352750  383102 ssh_runner.go:149] Run: crio --version
	I0708 23:44:50.423233  383102 out.go:165] * Preparing Kubernetes v1.21.2 on CRI-O 1.20.3 ...
	I0708 23:44:50.423307  383102 cli_runner.go:115] Run: docker network inspect pause-20210708233938-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0708 23:44:50.464228  383102 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0708 23:44:50.467264  383102 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:44:50.467314  383102 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:44:50.490940  383102 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:44:50.490957  383102 crio.go:333] Images already preloaded, skipping extraction
	I0708 23:44:50.490993  383102 ssh_runner.go:149] Run: sudo crictl images --output json
	I0708 23:44:50.512176  383102 crio.go:424] all images are preloaded for cri-o runtime.
	I0708 23:44:50.512192  383102 cache_images.go:74] Images are preloaded, skipping loading
	I0708 23:44:50.512245  383102 ssh_runner.go:149] Run: crio config
	I0708 23:44:50.587658  383102 cni.go:93] Creating CNI manager for ""
	I0708 23:44:50.587677  383102 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:44:50.587685  383102 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0708 23:44:50.587790  383102 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210708233938-257783 NodeName:pause-20210708233938-257783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/mi
nikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0708 23:44:50.587905  383102 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "pause-20210708233938-257783"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
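The manifest above bundles four kubeadm documents in one file: InitConfiguration (node-local bootstrap settings), ClusterConfiguration (control-plane layout), KubeletConfiguration, and KubeProxyConfiguration. Once written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below), it could be exercised without mutating the node via a dry run (a sketch; on a node that already carries static-pod manifests, kubeadm preflight will flag conflicts):

	sudo /var/lib/minikube/binaries/v1.21.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run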
	
	I0708 23:44:50.587994  383102 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-20210708233938-257783 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
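The empty ExecStart= line in the drop-in is the standard systemd idiom for clearing a unit's packaged command before redefining it. After the drop-in lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (scp'd just below), the merged unit can be reviewed with (a sketch):

	sudo systemctl daemon-reload
	systemctl cat kubelet    # prints kubelet.service followed by the 10-kubeadm.conf drop-in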
	I0708 23:44:50.588044  383102 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0708 23:44:50.593749  383102 binaries.go:44] Found k8s binaries, skipping transfer
	I0708 23:44:50.593819  383102 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0708 23:44:50.599162  383102 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (558 bytes)
	I0708 23:44:50.609681  383102 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0708 23:44:50.620170  383102 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1884 bytes)
	I0708 23:44:50.630479  383102 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0708 23:44:50.632974  383102 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783 for IP: 192.168.58.2
	I0708 23:44:50.633021  383102 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key
	I0708 23:44:50.633039  383102 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key
	I0708 23:44:50.633098  383102 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.key
	I0708 23:44:50.633117  383102 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.key.cee25041
	I0708 23:44:50.633142  383102 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.key
	I0708 23:44:50.633227  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem (1338 bytes)
	W0708 23:44:50.633268  383102 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783_empty.pem, impossibly tiny 0 bytes
	I0708 23:44:50.633280  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem (1675 bytes)
	I0708 23:44:50.633305  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem (1078 bytes)
	I0708 23:44:50.633332  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem (1123 bytes)
	I0708 23:44:50.633356  383102 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem (1679 bytes)
	I0708 23:44:50.634343  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0708 23:44:50.648438  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0708 23:44:50.662480  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0708 23:44:50.677256  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0708 23:44:50.691568  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0708 23:44:50.705113  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0708 23:44:50.718728  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0708 23:44:50.733001  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0708 23:44:50.748832  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0708 23:44:50.762662  383102 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem --> /usr/share/ca-certificates/257783.pem (1338 bytes)
	I0708 23:44:50.776552  383102 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0708 23:44:50.786598  383102 ssh_runner.go:149] Run: openssl version
	I0708 23:44:50.790834  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0708 23:44:50.796632  383102 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.799083  383102 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jul  8 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.799118  383102 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0708 23:44:50.803062  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0708 23:44:50.808543  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257783.pem && ln -fs /usr/share/ca-certificates/257783.pem /etc/ssl/certs/257783.pem"
	I0708 23:44:50.814370  383102 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.816803  383102 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jul  8 23:18 /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.816856  383102 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257783.pem
	I0708 23:44:50.820832  383102 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257783.pem /etc/ssl/certs/51391683.0"
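The openssl-hash-then-symlink pairs above are a manual c_rehash: OpenSSL resolves trusted CAs by a <subject-hash>.0 link name under /etc/ssl/certs. The generic form (a sketch):

	# link a CA certificate under its subject-hash name so OpenSSL trust lookups find it
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # here h is b5213941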
	I0708 23:44:50.826095  383102 kubeadm.go:390] StartCluster: {Name:pause-20210708233938-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:pause-20210708233938-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:44:50.826162  383102 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0708 23:44:50.826221  383102 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0708 23:44:50.849897  383102 cri.go:76] found id: "b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7"
	I0708 23:44:50.849919  383102 cri.go:76] found id: "7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a"
	I0708 23:44:50.849943  383102 cri.go:76] found id: "aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e"
	I0708 23:44:50.849950  383102 cri.go:76] found id: "0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef"
	I0708 23:44:50.849954  383102 cri.go:76] found id: "66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e"
	I0708 23:44:50.849963  383102 cri.go:76] found id: "76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41"
	I0708 23:44:50.849967  383102 cri.go:76] found id: "f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c"
	I0708 23:44:50.849975  383102 cri.go:76] found id: ""
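The seven IDs above come from the label-filtered crictl listing; each can be traced back to its pod by hand (a sketch; jq is an assumed convenience, not something this test uses):

	sudo crictl inspect b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 \
	  | jq -r '.status.labels["io.kubernetes.pod.name"]'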
	I0708 23:44:50.850009  383102 ssh_runner.go:149] Run: sudo runc list -f json
	I0708 23:44:50.888444  383102 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef","pid":1704,"status":"running","bundle":"/run/containers/storage/overlay-containers/0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef/userdata","rootfs":"/var/lib/containers/storage/overlay/cdce3ed6af07ab111ab2fb108c2309db54d9634ce1811e68896b699446ff3e45/merged","created":"2021-07-08T23:43:28.796258779Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1c9d3bb9","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1c9d3bb9\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.containe
r.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.409347635Z","io.kubernetes.cri-o.Image":"9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.21.2","io.kubernetes.cri-o.ImageRef":"9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c0a79d1d801cddeaa32444663181957f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210708233938-257783_c0a79d1d801cddeaa32444663181957f/kube-controller-mana
ger/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cdce3ed6af07ab111ab2fb108c2309db54d9634ce1811e68896b699446ff3e45/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_pa
th\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c0a79d1d801cddeaa32444663181957f/containers/kube-controller-manager/b3e49874\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c0a79d1d801cddeaa32444663181957f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exe
c\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.hash":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.seen":"2021-07-08T23:43:23.463755710Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","pid":1454,"status":"running","bundle":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata","rootfs":"/var/lib/containers/storage/overlay/995db9ddd5bff03a3e4252f22825a88a5095babda303cc304d0d9f42db6e7025/merged","created":"2021-07-08T23:43:27
.761173536Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.58.2:8443\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463729331Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"48a917795140826e0af6da63b039926b\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.500118923Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9
e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"48a917795140826e0af6da63b039926b\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210708233938-257783\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210708233938-257783_48a917795140826e0af6da63b039926b/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-20210708233938-257783\",\"uid\":\"48a917795140826e0af6da63b039926b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/995db9ddd5bff03a3e4252f22825a88a5095babda303cc304d0d9f42db6e7025/merged","io.kubernete
s.cri-o.Name":"k8s_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"48a917795140826e0af6da63b039926b","kubeadm.kubernetes.io/kub
e-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48a917795140826e0af6da63b039926b","kubernetes.io/config.seen":"2021-07-08T23:43:23.463729331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e","pid":1696,"status":"running","bundle":"/run/containers/storage/overlay-containers/66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e/userdata","rootfs":"/var/lib/containers/storage/overlay/c744810c8a09ccc54eaf6b538b13405ff75025ea0fcdf7c4f79b45507c315ea4/merged","created":"2021-07-08T23:43:28.74704463Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a5e28f4f","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationM
essagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a5e28f4f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.458654206Z","io.kubernetes.cri-o.Image":"ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.21.2","io.kubernetes.cri-o.ImageRef":"ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io
.kubernetes.pod.uid\":\"636f853856e082c029b85fb89a036300\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210708233938-257783_636f853856e082c029b85fb89a036300/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c744810c8a09ccc54eaf6b538b13405ff75025ea0fcdf7c4f79b45507c315ea4/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin"
:"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/636f853856e082c029b85fb89a036300/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/636f853856e082c029b85fb89a036300/containers/kube-scheduler/6f04df63\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"636f853856e082c029b85fb89a036300","kubernetes.io/config.hash":"636f853856e082c029b85fb89a036300","kubernetes.io/config.seen":"2021-07-08T23:43:23.463757039Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStop
USec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41","pid":1601,"status":"running","bundle":"/run/containers/storage/overlay-containers/76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41/userdata","rootfs":"/var/lib/containers/storage/overlay/01ae8557075556025556e28b3617bfe934a965557cd8fd4d435456c30b0c4d27/merged","created":"2021-07-08T23:43:28.33827029Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"364fba0d","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"364fba0d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",
\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.136323454Z","io.kubernetes.cri-o.Image":"05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.13-0","io.kubernetes.cri-o.ImageRef":"05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2349193ca86d9558bc895849265d2bbd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210708233938-257783_2349193ca86d9558bc895849265d2bbd/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overla
y/01ae8557075556025556e28b3617bfe934a965557cd8fd4d435456c30b0c4d27/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2349193ca86d9558bc895849265d2bbd/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2349193ca86d9558bc895849265d2bbd/containe
rs/etcd/486736f1\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2349193ca86d9558bc895849265d2bbd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2349193ca86d9558bc895849265d2bbd","kubernetes.io/config.seen":"2021-07-08T23:43:23.463758229Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","pid":2476,"status":"running","bundle":"/run/containers/storage/overlay-co
ntainers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata","rootfs":"/var/lib/containers/storage/overlay/8e2d756f3e3d21bd67765fd6dc79466722b3777db9c24dd3c63a849026ee706e/merged","created":"2021-07-08T23:43:58.920322142Z","annotations":{"app":"kindnet","controller-revision-hash":"694b6fb659","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:58.220124726Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:58.842364249Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":
"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kindnet-589hd","io.kubernetes.cri-o.Labels":"{\"app\":\"kindnet\",\"pod-template-generation\":\"1\",\"controller-revision-hash\":\"694b6fb659\",\"io.kubernetes.pod.uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kindnet-589hd\",\"tier\":\"node\",\"k8s-app\":\"kindnet\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-589hd_55f424f0-d7a4-418f-8572-27041384f3ba/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-589hd\",\"uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8e2d756f3e3d21bd67765fd6dc7946672
2b3777db9c24dd3c63a849026ee706e/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/shm","io.kubernetes.pod.name":"kindnet-589hd","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"55f424f0-d7a4-418f-8572-27041384f3ba","k8s-app":"kindnet","ku
bernetes.io/config.seen":"2021-07-08T23:43:58.220124726Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a","pid":2554,"status":"running","bundle":"/run/containers/storage/overlay-containers/7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a/userdata","rootfs":"/var/lib/containers/storage/overlay/462c6688ce1d023d5df1b74afd144759f5b176d71761f6bc62065141ab582bf5/merged","created":"2021-07-08T23:43:59.140412019Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"73cb1b1","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"73cb1b1\",\"io.kub
ernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:59.029318377Z","io.kubernetes.cri-o.Image":"d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.21.2","io.kubernetes.cri-o.ImageRef":"d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-rb2ws\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-rb2ws_06346e
2c-5d4d-4e26-9d87-bfe3d4715985/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/462c6688ce1d023d5df1b74afd144759f5b176d71761f6bc62065141ab582bf5/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"con
tainer_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/containers/kube-proxy/343cc99a\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/06346e2c-5d4d-4e26-9d87-bfe3d4715985/volumes/kubernetes.io~projected/kube-api-access-2vk7z\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-rb2ws","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"06346e2c-5d4d-4e26-9d87-bfe3d4715985","kubernetes.io/config.se
en":"2021-07-08T23:43:58.246007990Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","pid":1533,"status":"running","bundle":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata","rootfs":"/var/lib/containers/storage/overlay/72dc08e222ab55d43eaea1871cbcf2481a5b6ed4398bc531f5b83c9c2bf82abc/merged","created":"2021-07-08T23:43:27.9820761Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"2349193ca86d9558bc895849265d2bbd\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.58.2:2379\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463758229Z\",\"kubernetes.io/config.source\":\"file\"}","io.kub
ernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.685254972Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"etcd-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"2349193ca86d9558bc895849265d2bbd\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-20210708233938-257783\",\"io.kubernetes.c
ontainer.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-20210708233938-257783_2349193ca86d9558bc895849265d2bbd/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-20210708233938-257783\",\"uid\":\"2349193ca86d9558bc895849265d2bbd\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72dc08e222ab55d43eaea1871cbcf2481a5b6ed4398bc531f5b83c9c2bf82abc/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-20210708233938-257783_kube-system_2349193ca86d9558bc895849265d2bbd_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeH
andler":"","io.kubernetes.cri-o.SandboxID":"7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3/userdata/shm","io.kubernetes.pod.name":"etcd-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2349193ca86d9558bc895849265d2bbd","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2349193ca86d9558bc895849265d2bbd","kubernetes.io/config.seen":"2021-07-08T23:43:23.463758229Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","pid":1526,"status":"running","bundle":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd0
02fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata","rootfs":"/var/lib/containers/storage/overlay/f9500faec4678f8eedd7e562c4634c3983eea5a8367363ee2114993ba2617eb9/merged","created":"2021-07-08T23:43:28.02245442Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"636f853856e082c029b85fb89a036300\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463757039Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.680724389Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true"
,"io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"636f853856e082c029b85fb89a036300\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-20210708233938-257783\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-20210708233938-257783_636f853856e082c029b85fb89a036300/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-20210708233938-257783\",\"uid\":\"636f853856e082c029b85fb89a036300\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/
containers/storage/overlay/f9500faec4678f8eedd7e562c4634c3983eea5a8367363ee2114993ba2617eb9/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-20210708233938-257783_kube-system_636f853856e082c029b85fb89a036300_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-20210708233938-257783","io.kubernetes.p
od.namespace":"kube-system","io.kubernetes.pod.uid":"636f853856e082c029b85fb89a036300","kubernetes.io/config.hash":"636f853856e082c029b85fb89a036300","kubernetes.io/config.seen":"2021-07-08T23:43:23.463757039Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e","pid":2536,"status":"running","bundle":"/run/containers/storage/overlay-containers/aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e/userdata","rootfs":"/var/lib/containers/storage/overlay/196f295295a6ebd45ec80ca7af0769b45f724efb7c52e5a54faf0894d74b8486/merged","created":"2021-07-08T23:43:59.094903496Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"42880ebe","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernet
es.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"42880ebe\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:59.000934542Z","io.kubernetes.cri-o.Image":"f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-589hd\",\"io.kubernetes.pod.namespace\":\"kube-system\"
,\"io.kubernetes.pod.uid\":\"55f424f0-d7a4-418f-8572-27041384f3ba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-589hd_55f424f0-d7a4-418f-8572-27041384f3ba/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/196f295295a6ebd45ec80ca7af0769b45f724efb7c52e5a54faf0894d74b8486/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-589hd_kube-system_55f424f0-d7a4-418f-8572-27041384f3ba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":
"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/containers/kindnet-cni/63efdea9\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/55f424f0-d7a4-418f-8572-27041384f3ba/volumes/kubernetes.io~projected/kube-api-access-vxfqs\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-589hd","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"55f424f0-d7
a4-418f-8572-27041384f3ba","kubernetes.io/config.seen":"2021-07-08T23:43:58.220124726Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7","pid":3117,"status":"running","bundle":"/run/containers/storage/overlay-containers/b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7/userdata","rootfs":"/var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged","created":"2021-07-08T23:44:44.929527419Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3ba99b8a","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"T
CP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3ba99b8a\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:44:44.869981068Z","io.kubernetes.cr
i-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns/coredns:v1.8.0","io.kubernetes.cri-o.ImageRef":"1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-mnwpk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-mnwpk_cd8ce294-9dba-4d2e-8793-cc0862414323/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.ResolvPath":"/run/co
ntainers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.SandboxName":"k8s_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/containers/coredns/ebcb451b\",\"readonly\":false},{\"contai
ner_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/cd8ce294-9dba-4d2e-8793-cc0862414323/volumes/kubernetes.io~projected/kube-api-access-wjk4b\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-558bd4d5db-mnwpk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cd8ce294-9dba-4d2e-8793-cc0862414323","kubernetes.io/config.seen":"2021-07-08T23:44:44.378304571Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","pid":3088,"status":"running","bundle":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata","rootfs":"/var/lib/containers/storage/overlay/7c390972a7ebbdd53365500f2760439b1c797f16f323006acdd93709af97278c/merged",
"created":"2021-07-08T23:44:44.819414214Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-07-08T23:44:44.378304571Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"vethad721594\",\"mac\":\"fa:ff:ad:c6:25:66\"},{\"name\":\"eth0\",\"mac\":\"22:75:6a:ff:8f:5c\",\"sandbox\":\"/var/run/netns/ec4d5ca9-9e24-41d3-8013-97d3a7a811bd\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T2
3:44:44.69371705Z","io.kubernetes.cri-o.HostName":"coredns-558bd4d5db-mnwpk","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"coredns-558bd4d5db-mnwpk","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.name\":\"coredns-558bd4d5db-mnwpk\",\"pod-template-hash\":\"558bd4d5db\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-558bd4d5db-mnwpk_cd8ce294-9dba-4d2e-8793-cc0862414323/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-558bd4d5db-mnwpk\",\"uid\":\"cd8ce294-9dba-4d2e-8793-cc0862414323\",\"namespace\":\
"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7c390972a7ebbdd53365500f2760439b1c797f16f323006acdd93709af97278c/merged","io.kubernetes.cri-o.Name":"k8s_coredns-558bd4d5db-mnwpk_kube-system_cd8ce294-9dba-4d2e-8793-cc0862414323_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f/userdata/shm","io.kubernetes.pod.name":"coredns-558bd4d5db-mnwpk","
io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"cd8ce294-9dba-4d2e-8793-cc0862414323","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-07-08T23:44:44.378304571Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"558bd4d5db"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","pid":1483,"status":"running","bundle":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata","rootfs":"/var/lib/containers/storage/overlay/efa90599834890f8bd3da27a3a749f188e95537431bf95c2fbbda75a1a376820/merged","created":"2021-07-08T23:43:27.88584733Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-07-08T23:43:23.463755710Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes
.io/config.hash\":\"c0a79d1d801cddeaa32444663181957f\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:27.588501479Z","io.kubernetes.cri-o.HostName":"pause-20210708233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"c0a79d1d801cddeaa32444663181957f\",\"io.kubernetes.container.name\":\"
POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-20210708233938-257783\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-20210708233938-257783_c0a79d1d801cddeaa32444663181957f/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-pause-20210708233938-257783\",\"uid\":\"c0a79d1d801cddeaa32444663181957f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efa90599834890f8bd3da27a3a749f188e95537431bf95c2fbbda75a1a376820/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-20210708233938-257783_kube-system_c0a79d1d801cddeaa32444663181957f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"tr
ue","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-20210708233938-257783","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.hash":"c0a79d1d801cddeaa32444663181957f","kubernetes.io/config.seen":"2021-07-08T23:43:23.463755710Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"edb6f1460db485be501f94018d5caf7a
576fdd2e67b51c15322cf821191a0ebb","pid":2500,"status":"running","bundle":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata","rootfs":"/var/lib/containers/storage/overlay/621192df224b4f253243649a866cb69454571da103b6d3f3b1234d53c88440fd/merged","created":"2021-07-08T23:43:58.96207639Z","annotations":{"controller-revision-hash":"6896ccdc5","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-07-08T23:43:58.246007990Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-07-08T23:43:58.878275037Z","io.kubernetes.cri-o.HostName":"pause-202107
08233938-257783","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.4.1","io.kubernetes.cri-o.KubeName":"kube-proxy-rb2ws","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-rb2ws\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"6896ccdc5\",\"pod-template-generation\":\"1\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-rb2ws_06346e2c-5d4d-4e26-9d87-bfe3d4715985/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-rb2ws\",\"uid\":\"06346e2c-5d4d-4e26-9d87-bfe3d4715985\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"
/var/lib/containers/storage/overlay/621192df224b4f253243649a866cb69454571da103b6d3f3b1234d53c88440fd/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-rb2ws_kube-system_06346e2c-5d4d-4e26-9d87-bfe3d4715985_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb/userdata/shm","io.kubernetes.pod.name":"kube-proxy-rb2ws","io.kubernetes.pod.namespace":"kube-system","io.kuberne
tes.pod.uid":"06346e2c-5d4d-4e26-9d87-bfe3d4715985","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-07-08T23:43:58.246007990Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c","pid":1608,"status":"running","bundle":"/run/containers/storage/overlay-containers/f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c/userdata","rootfs":"/var/lib/containers/storage/overlay/acf62325b91e31207f04e3f39616a0820b0809fa7e55c2b2ce5eaf30b7367ddc/merged","created":"2021-07-08T23:43:28.292409803Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"44b38584","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o
.Annotations":"{\"io.kubernetes.container.hash\":\"44b38584\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-07-08T23:43:28.165591981Z","io.kubernetes.cri-o.Image":"2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.21.2","io.kubernetes.cri-o.ImageRef":"2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-20210708233938-257783\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"48a917795140826e0
af6da63b039926b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-20210708233938-257783_48a917795140826e0af6da63b039926b/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/acf62325b91e31207f04e3f39616a0820b0809fa7e55c2b2ce5eaf30b7367ddc/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-20210708233938-257783_kube-system_48a917795140826e0af6da63b039926b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":
"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/48a917795140826e0af6da63b039926b/containers/kube-apiserver/141310e0\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/48a917795140826e0af6da63b039926b/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-pause-20210708233938-257783","io.kubernetes.pod.nam
espace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"48a917795140826e0af6da63b039926b","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48a917795140826e0af6da63b039926b","kubernetes.io/config.seen":"2021-07-08T23:43:23.463729331Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0708 23:44:50.889436  383102 cri.go:113] list returned 14 containers
	I0708 23:44:50.889463  383102 cri.go:116] container: {ID:0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef Status:running}
	I0708 23:44:50.889494  383102 cri.go:122] skipping {0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef running}: state = "running", want "paused"
	I0708 23:44:50.889513  383102 cri.go:116] container: {ID:153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4 Status:running}
	I0708 23:44:50.889538  383102 cri.go:118] skipping 153d3d24ac6ae0b1319b18921ba52c6f9e6a0c5a86bfd023a6397dd35cf1a3f4 - not in ps
	I0708 23:44:50.889556  383102 cri.go:116] container: {ID:66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e Status:running}
	I0708 23:44:50.889571  383102 cri.go:122] skipping {66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e running}: state = "running", want "paused"
	I0708 23:44:50.889587  383102 cri.go:116] container: {ID:76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41 Status:running}
	I0708 23:44:50.889601  383102 cri.go:122] skipping {76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41 running}: state = "running", want "paused"
	I0708 23:44:50.889626  383102 cri.go:116] container: {ID:79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2 Status:running}
	I0708 23:44:50.889644  383102 cri.go:118] skipping 79814c347cb14b5bc9605d60aeb3d9d5bef8c87c45fd16dd00a44923ee57dea2 - not in ps
	I0708 23:44:50.889657  383102 cri.go:116] container: {ID:7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a Status:running}
	I0708 23:44:50.889671  383102 cri.go:122] skipping {7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a running}: state = "running", want "paused"
	I0708 23:44:50.889687  383102 cri.go:116] container: {ID:7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3 Status:running}
	I0708 23:44:50.889711  383102 cri.go:118] skipping 7df3a2be1b33d7728220225574ed49527e612ea908d2860fe4b136e49e94a4d3 - not in ps
	I0708 23:44:50.889726  383102 cri.go:116] container: {ID:98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957 Status:running}
	I0708 23:44:50.889739  383102 cri.go:118] skipping 98331c8576b70dd56fd002fd7902e5a3088981cd502522a0eb1245dcb2e7d957 - not in ps
	I0708 23:44:50.889751  383102 cri.go:116] container: {ID:aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e Status:running}
	I0708 23:44:50.889763  383102 cri.go:122] skipping {aa26d8524150c850f5291c2edfd1d323d9497aeb01b5ca3a72fe49c14fc3ea3e running}: state = "running", want "paused"
	I0708 23:44:50.889786  383102 cri.go:116] container: {ID:b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 Status:running}
	I0708 23:44:50.889802  383102 cri.go:122] skipping {b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7 running}: state = "running", want "paused"
	I0708 23:44:50.889816  383102 cri.go:116] container: {ID:ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f Status:running}
	I0708 23:44:50.889831  383102 cri.go:118] skipping ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f - not in ps
	I0708 23:44:50.889844  383102 cri.go:116] container: {ID:ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe Status:running}
	I0708 23:44:50.889868  383102 cri.go:118] skipping ebf106620bd16f0ed87b00d4b69ae33372428930465cc76f4f5d55c3d07302fe - not in ps
	I0708 23:44:50.889884  383102 cri.go:116] container: {ID:edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb Status:running}
	I0708 23:44:50.889899  383102 cri.go:118] skipping edb6f1460db485be501f94018d5caf7a576fdd2e67b51c15322cf821191a0ebb - not in ps
	I0708 23:44:50.889910  383102 cri.go:116] container: {ID:f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c Status:running}
	I0708 23:44:50.889924  383102 cri.go:122] skipping {f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c running}: state = "running", want "paused"
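[Editor's note] The skip decisions above apply two rules: entries whose ID never appeared in the `crictl ps` output are dropped (these are the pause sandboxes, logged as "not in ps"), and remaining entries are dropped when their state does not match the requested one ("paused" here, since this is the pause test). A sketch of that filter — type and helper names are mine, not minikube's:

package crifilter

// ctr is the minimal view of one entry in the JSON list above.
type ctr struct {
	ID     string
	Status string
}

// filterByState mirrors the skip logic in the cri.go lines above.
func filterByState(all []ctr, inPS map[string]bool, want string) []string {
	var keep []string
	for _, c := range all {
		if !inPS[c.ID] {
			continue // "skipping ... - not in ps"
		}
		if c.Status != want {
			continue // `state = "running", want "paused"`
		}
		keep = append(keep, c.ID)
	}
	return keep
}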
	I0708 23:44:50.889976  383102 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0708 23:44:50.896457  383102 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0708 23:44:50.896471  383102 kubeadm.go:600] restartCluster start
	I0708 23:44:50.896504  383102 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0708 23:44:50.901607  383102 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0708 23:44:50.902345  383102 kubeconfig.go:93] found "pause-20210708233938-257783" server: "https://192.168.58.2:8443"
	I0708 23:44:50.902810  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:50.904266  383102 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0708 23:44:50.910038  383102 api_server.go:164] Checking apiserver status ...
	I0708 23:44:50.910093  383102 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:44:50.921551  383102 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1608/cgroup
	I0708 23:44:50.927266  383102 api_server.go:180] apiserver freezer: "11:freezer:/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/system.slice/crio-f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c.scope"
	I0708 23:44:50.927324  383102 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/system.slice/crio-f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c.scope/freezer.state
	I0708 23:44:50.932380  383102 api_server.go:202] freezer state: "THAWED"
	I0708 23:44:50.932400  383102 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0708 23:44:50.940647  383102 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
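[Editor's note] The probe above runs in three steps: pgrep resolves the apiserver PID, the container's freezer cgroup is read to confirm it is THAWED (i.e. not paused by the runtime), and only then is /healthz queried. A compressed sketch of the last two steps, assuming the caller supplies an http.Client that trusts the cluster CA; the path and URL are the ones shown in the log:

package apiprobe

import (
	"net/http"
	"os"
	"strings"
)

// healthy reproduces the checks logged above: the apiserver's freezer
// cgroup must be THAWED, and /healthz must return 200.
func healthy(client *http.Client, freezerStatePath, healthzURL string) (bool, error) {
	state, err := os.ReadFile(freezerStatePath)
	if err != nil {
		return false, err
	}
	if strings.TrimSpace(string(state)) != "THAWED" {
		return false, nil // frozen: the runtime has paused the container
	}
	resp, err := client.Get(healthzURL) // e.g. https://192.168.58.2:8443/healthz
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}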
	I0708 23:44:50.968340  383102 system_pods.go:86] 7 kube-system pods found
	I0708 23:44:50.968365  383102 system_pods.go:89] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:50.968372  383102 system_pods.go:89] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:50.968381  383102 system_pods.go:89] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:50.968389  383102 system_pods.go:89] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:50.968394  383102 system_pods.go:89] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:50.968404  383102 system_pods.go:89] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:50.968409  383102 system_pods.go:89] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:50.969071  383102 api_server.go:139] control plane version: v1.21.2
	I0708 23:44:50.969091  383102 kubeadm.go:594] The running cluster does not require reconfiguration: 192.168.58.2
	I0708 23:44:50.969100  383102 kubeadm.go:647] Taking a shortcut, as the cluster seems to be properly configured
	I0708 23:44:50.969105  383102 kubeadm.go:604] restartCluster took 72.629672ms
	I0708 23:44:50.969114  383102 kubeadm.go:392] StartCluster complete in 143.022344ms
	I0708 23:44:50.969124  383102 settings.go:142] acquiring lock: {Name:mkd7e81a263e91a8570dc867d9c6f95db0e3f272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:44:50.969188  383102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:44:50.969783  383102 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig: {Name:mk7ece99e42242db0c85d6c11531cc9d1c12a34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0708 23:44:50.970369  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:50.973359  383102 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210708233938-257783" rescaled to 1
	I0708 23:44:50.973409  383102 start.go:220] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0708 23:44:50.977036  383102 out.go:165] * Verifying Kubernetes components...
	I0708 23:44:50.977080  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:50.973644  383102 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0708 23:44:50.973655  383102 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0708 23:44:50.977189  383102 addons.go:59] Setting storage-provisioner=true in profile "pause-20210708233938-257783"
	I0708 23:44:50.977229  383102 addons.go:135] Setting addon storage-provisioner=true in "pause-20210708233938-257783"
	W0708 23:44:50.977246  383102 addons.go:147] addon storage-provisioner should already be in state true
	I0708 23:44:50.977293  383102 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:50.977346  383102 addons.go:59] Setting default-storageclass=true in profile "pause-20210708233938-257783"
	I0708 23:44:50.977366  383102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210708233938-257783"
	I0708 23:44:50.977642  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:50.977846  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:51.040750  383102 out.go:165]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0708 23:44:51.040845  383102 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:44:51.040854  383102 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0708 23:44:51.040902  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
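[Editor's note] The inspect call above uses a Go template to pull the host port Docker mapped to the container's 22/tcp; that value feeds the ssh client created a few lines below (127.0.0.1:49617). The same lookup from Go, assuming the docker CLI is on PATH:

package dockerport

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port mapped to 22/tcp, the value the
// sshutil.go lines below connect to on 127.0.0.1.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}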
	I0708 23:44:51.059995  383102 kapi.go:59] client config for pause-20210708233938-257783: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/pause-20210708233938-257783/c
lient.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1113600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0708 23:44:51.063879  383102 addons.go:135] Setting addon default-storageclass=true in "pause-20210708233938-257783"
	W0708 23:44:51.063911  383102 addons.go:147] addon default-storageclass should already be in state true
	I0708 23:44:51.063955  383102 host.go:66] Checking if "pause-20210708233938-257783" exists ...
	I0708 23:44:51.064454  383102 cli_runner.go:115] Run: docker container inspect pause-20210708233938-257783 --format={{.State.Status}}
	I0708 23:44:51.120151  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:51.133089  383102 start.go:710] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0708 23:44:51.133129  383102 node_ready.go:35] waiting up to 6m0s for node "pause-20210708233938-257783" to be "Ready" ...
	I0708 23:44:51.144796  383102 node_ready.go:49] node "pause-20210708233938-257783" has status "Ready":"True"
	I0708 23:44:51.144810  383102 node_ready.go:38] duration metric: took 11.663188ms waiting for node "pause-20210708233938-257783" to be "Ready" ...
	I0708 23:44:51.144817  383102 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0708 23:44:51.151821  383102 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0708 23:44:51.151836  383102 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0708 23:44:51.151881  383102 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20210708233938-257783
	I0708 23:44:51.162008  383102 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.178412  383102 pod_ready.go:92] pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.178425  383102 pod_ready.go:81] duration metric: took 16.393726ms waiting for pod "coredns-558bd4d5db-mnwpk" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.178434  383102 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.182215  383102 pod_ready.go:92] pod "etcd-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.182231  383102 pod_ready.go:81] duration metric: took 3.790081ms waiting for pod "etcd-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.182242  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.185941  383102 pod_ready.go:92] pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.185957  383102 pod_ready.go:81] duration metric: took 3.703058ms waiting for pod "kube-apiserver-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.185966  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.193311  383102 pod_ready.go:92] pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.193326  383102 pod_ready.go:81] duration metric: took 7.350387ms waiting for pod "kube-controller-manager-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.193335  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rb2ws" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.199623  383102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49617 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/pause-20210708233938-257783/id_rsa Username:docker}
	I0708 23:44:51.228409  383102 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0708 23:44:51.289804  383102 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0708 23:44:51.544987  383102 pod_ready.go:92] pod "kube-proxy-rb2ws" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.545034  383102 pod_ready.go:81] duration metric: took 351.691462ms waiting for pod "kube-proxy-rb2ws" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.545056  383102 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.611304  383102 out.go:165] * Enabled addons: storage-provisioner, default-storageclass
	I0708 23:44:51.611327  383102 addons.go:344] enableAddons completed in 637.673923ms
	I0708 23:44:51.944191  383102 pod_ready.go:92] pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace has status "Ready":"True"
	I0708 23:44:51.944240  383102 pod_ready.go:81] duration metric: took 399.15943ms waiting for pod "kube-scheduler-pause-20210708233938-257783" in "kube-system" namespace to be "Ready" ...
	I0708 23:44:51.944260  383102 pod_ready.go:38] duration metric: took 799.430802ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
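
For context on the pod_ready waits recorded above: a minimal sketch of this style of readiness poll built on client-go. The helper name, poll interval, and kubeconfig path are illustrative assumptions, not minikube's actual implementation.

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports condition Ready=True,
// mirroring the "waiting up to 6m0s for pod ... to be Ready" lines above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, nil // tolerate transient API errors and keep polling
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    })
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    start := time.Now()
    if err := waitPodReady(cs, "kube-system", "coredns-558bd4d5db-mnwpk", 6*time.Minute); err != nil {
        panic(err)
    }
    fmt.Printf("pod ready after %s\n", time.Since(start))
}
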
	I0708 23:44:51.944284  383102 api_server.go:50] waiting for apiserver process to appear ...
	I0708 23:44:51.944353  383102 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:44:51.962521  383102 api_server.go:70] duration metric: took 989.086682ms to wait for apiserver process to appear ...
	I0708 23:44:51.962540  383102 api_server.go:86] waiting for apiserver healthz status ...
	I0708 23:44:51.962549  383102 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0708 23:44:51.976017  383102 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0708 23:44:51.976872  383102 api_server.go:139] control plane version: v1.21.2
	I0708 23:44:51.976889  383102 api_server.go:129] duration metric: took 14.342835ms to wait for apiserver health ...
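
The healthz wait above amounts to an HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A minimal sketch, assuming the apiserver's self-signed certificate is skipped rather than pinned (minikube's real check runs through its own retry helpers):

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

// apiserverHealthy reports whether GET <url> returns 200 with body "ok".
func apiserverHealthy(url string) bool {
    client := &http.Client{
        Timeout: 2 * time.Second,
        // Skip verification for this probe only; real callers should pin the cluster CA.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    resp, err := client.Get(url)
    if err != nil {
        return false
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    return resp.StatusCode == http.StatusOK && string(body) == "ok"
}

func main() {
    url := "https://192.168.58.2:8443/healthz" // endpoint from the log above
    for i := 0; i < 10; i++ {
        if apiserverHealthy(url) {
            fmt.Println("apiserver healthz: ok")
            return
        }
        time.Sleep(time.Second)
    }
    fmt.Println("apiserver never became healthy")
}
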
	I0708 23:44:51.976896  383102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0708 23:44:52.147101  383102 system_pods.go:59] 8 kube-system pods found
	I0708 23:44:52.147126  383102 system_pods.go:61] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:52.147132  383102 system_pods.go:61] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:52.147156  383102 system_pods.go:61] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:52.147170  383102 system_pods.go:61] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:52.147175  383102 system_pods.go:61] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:52.147180  383102 system_pods.go:61] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:52.147188  383102 system_pods.go:61] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:52.147196  383102 system_pods.go:61] "storage-provisioner" [939f2223-21e0-4e8d-8f43-fd8f9cc992b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 23:44:52.147205  383102 system_pods.go:74] duration metric: took 170.300522ms to wait for pod list to return data ...
	I0708 23:44:52.147214  383102 default_sa.go:34] waiting for default service account to be created ...
	I0708 23:44:52.344080  383102 default_sa.go:45] found service account: "default"
	I0708 23:44:52.344097  383102 default_sa.go:55] duration metric: took 196.867452ms for default service account to be created ...
	I0708 23:44:52.344104  383102 system_pods.go:116] waiting for k8s-apps to be running ...
	I0708 23:44:52.546575  383102 system_pods.go:86] 8 kube-system pods found
	I0708 23:44:52.546597  383102 system_pods.go:89] "coredns-558bd4d5db-mnwpk" [cd8ce294-9dba-4d2e-8793-cc0862414323] Running
	I0708 23:44:52.546603  383102 system_pods.go:89] "etcd-pause-20210708233938-257783" [7b2c684b-c74d-4d6f-8e10-cba2c125105a] Running
	I0708 23:44:52.546608  383102 system_pods.go:89] "kindnet-589hd" [55f424f0-d7a4-418f-8572-27041384f3ba] Running
	I0708 23:44:52.546614  383102 system_pods.go:89] "kube-apiserver-pause-20210708233938-257783" [98d04646-9628-4fa5-b9dc-3748b16f6c82] Running
	I0708 23:44:52.546619  383102 system_pods.go:89] "kube-controller-manager-pause-20210708233938-257783" [cf3dfc18-c291-4fe8-be4b-22a8ba04c742] Running
	I0708 23:44:52.546624  383102 system_pods.go:89] "kube-proxy-rb2ws" [06346e2c-5d4d-4e26-9d87-bfe3d4715985] Running
	I0708 23:44:52.546629  383102 system_pods.go:89] "kube-scheduler-pause-20210708233938-257783" [f2df3e04-125e-455b-b787-f607f1809abf] Running
	I0708 23:44:52.546638  383102 system_pods.go:89] "storage-provisioner" [939f2223-21e0-4e8d-8f43-fd8f9cc992b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0708 23:44:52.546644  383102 system_pods.go:126] duration metric: took 202.535502ms to wait for k8s-apps to be running ...
	I0708 23:44:52.546651  383102 system_svc.go:44] waiting for kubelet service to be running ....
	I0708 23:44:52.546691  383102 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:44:52.554858  383102 system_svc.go:56] duration metric: took 8.204667ms WaitForService to wait for kubelet.
	I0708 23:44:52.554876  383102 kubeadm.go:547] duration metric: took 1.581445531s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0708 23:44:52.554910  383102 node_conditions.go:102] verifying NodePressure condition ...
	I0708 23:44:52.744446  383102 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0708 23:44:52.744473  383102 node_conditions.go:123] node cpu capacity is 2
	I0708 23:44:52.744486  383102 node_conditions.go:105] duration metric: took 189.57062ms to run NodePressure ...
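
The node_conditions lines read these figures straight off the node object's capacity map. A minimal client-go sketch of the same lookup (kubeconfig path assumed as before):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        // Matches the log: "node storage ephemeral capacity is 40474572Ki",
        // "node cpu capacity is 2".
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), storage.String())
    }
}
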
	I0708 23:44:52.744495  383102 start.go:225] waiting for startup goroutines ...
	I0708 23:44:52.795296  383102 start.go:462] kubectl: 1.21.2, cluster: 1.21.2 (minor skew: 0)
	I0708 23:44:52.798688  383102 out.go:165] * Done! kubectl is now configured to use "pause-20210708233938-257783" cluster and "default" namespace by default
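
The version line above is a simple minor-version skew check between the local kubectl and the cluster. A sketch of the comparison; the parsing helper is an illustrative stand-in for the real check, which queries both binaries:

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) int {
    parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    if len(parts) < 2 {
        return -1
    }
    m, err := strconv.Atoi(parts[1])
    if err != nil {
        return -1
    }
    return m
}

func main() {
    kubectl, cluster := "1.21.2", "1.21.2" // values from the log line above
    skew := minor(kubectl) - minor(cluster)
    if skew < 0 {
        skew = -skew
    }
    fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}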
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Thu 2021-07-08 23:42:57 UTC, end at Thu 2021-07-08 23:45:06 UTC. --
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.752367414Z" level=info msg="Got pod network &{Name:coredns-558bd4d5db-mnwpk Namespace:kube-system ID:ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f NetNS:/var/run/netns/ec4d5ca9-9e24-41d3-8013-97d3a7a811bd Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.752537333Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.851068596Z" level=info msg="Ran pod sandbox ded1c1360c40746b3f7af57c9a40eb4b8c573c7336642fbea2e01dee7e5df96f with infra container: kube-system/coredns-558bd4d5db-mnwpk/POD" id=c9132cb2-089f-4563-8891-94bd70e68b31 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.851819090Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.0" id=535e481f-c9b5-4da0-888e-28da677e78c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.852397777Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:919b800fed6eaf6c9a55c3017c0aa3187bfe5d81abefbe49bb27f968458b94cc k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:39402464,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=535e481f-c9b5-4da0-888e-28da677e78c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.855119319Z" level=info msg="Checking image status: k8s.gcr.io/coredns/coredns:v1.8.0" id=c226c346-9d15-4dc7-8640-d22668769349 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.855626237Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:919b800fed6eaf6c9a55c3017c0aa3187bfe5d81abefbe49bb27f968458b94cc k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:39402464,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c226c346-9d15-4dc7-8640-d22668769349 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.856387143Z" level=info msg="Creating container: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=9ffe0ac0-8fdf-4cab-92cd-d96e15acb1f8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.870099418Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged/etc/passwd: no such file or directory"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.870133896Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a927d3a65ba2694d404e8c21ba83376ce6de4ed50019e5309df0f927766db32d/merged/etc/group: no such file or directory"
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.944318612Z" level=info msg="Created container b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=9ffe0ac0-8fdf-4cab-92cd-d96e15acb1f8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.944919240Z" level=info msg="Starting container: b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7" id=99de0758-72ab-4e9c-b175-4fef1b41793e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:44 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:44.955103274Z" level=info msg="Started container b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7: kube-system/coredns-558bd4d5db-mnwpk/coredns" id=99de0758-72ab-4e9c-b175-4fef1b41793e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:51 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:51.912211778Z" level=info msg="Running pod sandbox: kube-system/storage-provisioner/POD" id=3fe405bf-c337-430c-ba8b-4acaabc95cf2 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.048464048Z" level=info msg="Ran pod sandbox 049e6b5335b3d37bd7b1f71f526dfb38a2146de747c5333e44dd562b58da320c with infra container: kube-system/storage-provisioner/POD" id=3fe405bf-c337-430c-ba8b-4acaabc95cf2 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.049231829Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b3e22441-f73c-48a3-b70b-8df95e9c6a80 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.049808794Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b3e22441-f73c-48a3-b70b-8df95e9c6a80 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.050512018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6d37e3ff-30d9-415f-ab55-83d772199ce8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.051005283Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6d37e3ff-30d9-415f-ab55-83d772199ce8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.051652721Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cbb3b7bb-00a6-411a-970c-153e4e488ad5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.065018823Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2db347e4f0d5d5b51e801807bc8894287c0f6d7b8ece1a922cadd38989584d2d/merged/etc/passwd: no such file or directory"
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.065122749Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2db347e4f0d5d5b51e801807bc8894287c0f6d7b8ece1a922cadd38989584d2d/merged/etc/group: no such file or directory"
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.130702118Z" level=info msg="Created container ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf: kube-system/storage-provisioner/storage-provisioner" id=cbb3b7bb-00a6-411a-970c-153e4e488ad5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.131445227Z" level=info msg="Starting container: ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf" id=5736abf9-7c6e-4d2d-99f4-1b9d9b3933f2 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 08 23:44:52 pause-20210708233938-257783 crio[517]: time="2021-07-08 23:44:52.141586857Z" level=info msg="Started container ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf: kube-system/storage-provisioner/storage-provisioner" id=5736abf9-7c6e-4d2d-99f4-1b9d9b3933f2 name=/runtime.v1alpha2.RuntimeService/StartContainer
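
Each "Checking image status" pair above is a /runtime.v1alpha2.ImageService/ImageStatus RPC against CRI-O's unix socket. A minimal sketch of issuing the same call directly; dial options and error handling are illustrative assumptions:

package main

import (
    "context"
    "fmt"
    "time"

    "google.golang.org/grpc"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    // gRPC resolves the unix:// scheme to the local CRI-O socket.
    conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock", grpc.WithInsecure(), grpc.WithBlock())
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    client := runtimeapi.NewImageServiceClient(conn)
    resp, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
        Image: &runtimeapi.ImageSpec{Image: "k8s.gcr.io/coredns/coredns:v1.8.0"},
    })
    if err != nil {
        panic(err)
    }
    // Prints the same Id/RepoTags/RepoDigests fields seen in the log above.
    fmt.Printf("image status: %v\n", resp.Image)
}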
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	ebc191d78d332       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   14 seconds ago       Running             storage-provisioner       0                   049e6b5335b3d
	b7d6404120fcb       1a1f05a2cd7c2fbfa7b45b21128c8a4880c003ca482460081dc12d76bfa863e8   22 seconds ago       Running             coredns                   0                   ded1c1360c407
	7ca432c9b0953       d7b0c9b45d678fc9500baf09a783a4b147cc4e0df0c71c6e2d26db3a86ac5105   About a minute ago   Running             kube-proxy                0                   edb6f1460db48
	aa26d8524150c       f37b7c809e5dcc2090371f933f7acb726bb1bffd5652980d2e1d7e2eff5cd301   About a minute ago   Running             kindnet-cni               0                   79814c347cb14
	0cb308b9b448f       9a5764dc4d8eb9377dcf5315ad0220bc5686497464f0157e93ed90dc97bf1630   About a minute ago   Running             kube-controller-manager   0                   ebf106620bd16
	66d5fee706a3d       ed7155fa5d9d8d10e852a1894855c6e6fe12b968f3dbdd00622a9d0591360bc4   About a minute ago   Running             kube-scheduler            0                   98331c8576b70
	76999b0177398       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28   About a minute ago   Running             etcd                      0                   7df3a2be1b33d
	f275fc53ae00f       2811c599675e449c5a00a8f27a49c07b9ca53c01d607f1e3c0160a17d66863c0   About a minute ago   Running             kube-apiserver            0                   153d3d24ac6ae
	
	* 
	* ==> coredns [b7d6404120fcb54c69510460c074f37d071db54f7aa2f5b9b3154bd86eef20d7] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7cb80d9b13c0af3fa1ba04fc3eef5f89
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210708233938-257783
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-20210708233938-257783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5
	                    minikube.k8s.io/name=pause-20210708233938-257783
	                    minikube.k8s.io/updated_at=2021_07_08T23_43_45_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 08 Jul 2021 23:43:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210708233938-257783
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 08 Jul 2021 23:45:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:43:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 08 Jul 2021 23:44:39 +0000   Thu, 08 Jul 2021 23:44:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    pause-20210708233938-257783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8034928Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                06c382d0-5723-4c28-97d9-2bf95fc86b49
	  Boot ID:                    7cbe50af-3171-4d81-8fca-78216a04984f
	  Kernel Version:             5.8.0-1038-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.2
	  Kube-Proxy Version:         v1.21.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-mnwpk                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     69s
	  kube-system                 etcd-pause-20210708233938-257783                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         78s
	  kube-system                 kindnet-589hd                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      69s
	  kube-system                 kube-apiserver-pause-20210708233938-257783             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-20210708233938-257783    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-rb2ws                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-pause-20210708233938-257783             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  100s (x8 over 101s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x7 over 101s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x7 over 101s)  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 79s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s                  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s                  kubelet     Node pause-20210708233938-257783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s                  kubelet     Node pause-20210708233938-257783 status is now: NodeHasSufficientPID
	  Normal  Starting                 68s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                28s                  kubelet     Node pause-20210708233938-257783 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000671] FS-Cache: O-key=[8] '77e60b0000000000'
	[  +0.000514] FS-Cache: N-cookie c=00000000e6b84f6b [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000917] FS-Cache: N-cookie d=0000000052778918 n=000000009967b9dc
	[  +0.000663] FS-Cache: N-key=[8] '77e60b0000000000'
	[  +0.001810] FS-Cache: Duplicate cookie detected
	[  +0.000530] FS-Cache: O-cookie c=0000000057c7fc1d [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=0000000052778918 n=00000000efae32c9
	[  +0.000673] FS-Cache: O-key=[8] '77e60b0000000000'
	[  +0.000542] FS-Cache: N-cookie c=00000000f56d3f5d [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000863] FS-Cache: N-cookie d=0000000052778918 n=00000000e997ef03
	[  +0.000702] FS-Cache: N-key=[8] '77e60b0000000000'
	[  +1.187985] FS-Cache: Duplicate cookie detected
	[  +0.000541] FS-Cache: O-cookie c=000000000ea7a21c [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.000903] FS-Cache: O-cookie d=0000000052778918 n=00000000f7f72a4b
	[  +0.000697] FS-Cache: O-key=[8] '76e60b0000000000'
	[  +0.000532] FS-Cache: N-cookie c=00000000dc14d28d [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000872] FS-Cache: N-cookie d=0000000052778918 n=00000000fd1ba8e6
	[  +0.000719] FS-Cache: N-key=[8] '76e60b0000000000'
	[  +0.299966] FS-Cache: Duplicate cookie detected
	[  +0.000563] FS-Cache: O-cookie c=00000000b39eb93d [p=00000000e0c1ccf3 fl=226 nc=0 na=1]
	[  +0.000913] FS-Cache: O-cookie d=0000000052778918 n=00000000654c5f24
	[  +0.000696] FS-Cache: O-key=[8] '79e60b0000000000'
	[  +0.000542] FS-Cache: N-cookie c=000000004dd4c5bf [p=00000000e0c1ccf3 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=0000000052778918 n=000000008dfb704a
	[  +0.000684] FS-Cache: N-key=[8] '79e60b0000000000'
	
	* 
	* ==> etcd [76999b0177398403eac252ffc1ced5a941ee32f7abf44bbed40f0c18085efc41] <==
	* 2021-07-08 23:43:29.407011 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-07-08 23:43:29.407103 I | etcdserver/api: enabled capabilities for version 3.4
	2021-07-08 23:43:29.407157 I | etcdserver: published {Name:pause-20210708233938-257783 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-07-08 23:43:29.407465 I | embed: ready to serve client requests
	2021-07-08 23:43:29.415509 I | embed: serving client requests on 127.0.0.1:2379
	2021-07-08 23:43:29.423096 I | embed: ready to serve client requests
	2021-07-08 23:43:29.424420 I | embed: serving client requests on 192.168.58.2:2379
	2021-07-08 23:43:38.896326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:43:39.824755 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:heapster\" " with result "range_response_count:0 size:4" took too long (132.781472ms) to execute
	2021-07-08 23:43:40.062517 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (126.484476ms) to execute
	2021-07-08 23:43:40.062723 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:node-bootstrapper\" " with result "range_response_count:0 size:4" took too long (157.087895ms) to execute
	2021-07-08 23:43:41.406099 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:kube-scheduler\" " with result "range_response_count:0 size:5" took too long (106.875165ms) to execute
	2021-07-08 23:43:41.406344 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210708233938-257783\" " with result "range_response_count:1 size:5706" took too long (100.988866ms) to execute
	2021-07-08 23:43:41.800497 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler\" " with result "range_response_count:0 size:5" took too long (104.221848ms) to execute
	2021-07-08 23:43:42.415075 W | etcdserver: read-only range request "key:\"/registry/roles/kube-public/system:controller:bootstrap-signer\" " with result "range_response_count:0 size:5" took too long (113.749552ms) to execute
	2021-07-08 23:43:42.790083 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system:controller:cloud-provider\" " with result "range_response_count:0 size:5" took too long (139.951306ms) to execute
	2021-07-08 23:43:42.790797 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (104.526083ms) to execute
	2021-07-08 23:43:55.482711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:43:58.854149 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:08.855081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:18.853976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:28.854207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:38.854850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:48.854356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-07-08 23:44:58.855084 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  23:45:07 up  2:27,  0 users,  load average: 3.69, 2.87, 1.90
	Linux pause-20210708233938-257783 5.8.0-1038-aws #40~20.04.1-Ubuntu SMP Thu Jun 17 13:20:15 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [f275fc53ae00f68f392c9ba8ab2d610ba701bb6137e8bdad9d26c185874b563c] <==
	* I0708 23:43:38.659980       1 cache.go:39] Caches are synced for autoregister controller
	I0708 23:43:38.660019       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0708 23:43:38.765817       1 controller.go:611] quota admission added evaluator for: namespaces
	I0708 23:43:39.399647       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0708 23:43:39.399669       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0708 23:43:39.413314       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0708 23:43:39.428902       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0708 23:43:39.428920       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0708 23:43:42.417829       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0708 23:43:42.615824       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0708 23:43:42.951365       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0708 23:43:42.952308       1 controller.go:611] quota admission added evaluator for: endpoints
	I0708 23:43:42.961082       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0708 23:43:44.101667       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0708 23:43:44.674905       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0708 23:43:44.719032       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0708 23:43:48.275800       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0708 23:43:58.042264       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0708 23:43:58.285534       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0708 23:44:04.479561       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:44:04.479599       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:44:04.479606       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0708 23:44:35.434880       1 client.go:360] parsed scheme: "passthrough"
	I0708 23:44:35.434919       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0708 23:44:35.434927       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [0cb308b9b448f4fe81673c8f7f196504d6673259b6201819fabe01e79226c9ef] <==
	* I0708 23:43:57.880761       1 shared_informer.go:247] Caches are synced for HPA 
	I0708 23:43:57.880837       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0708 23:43:57.907676       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0708 23:43:57.908756       1 shared_informer.go:247] Caches are synced for endpoint 
	I0708 23:43:57.952724       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210708233938-257783" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0708 23:43:57.956872       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210708233938-257783" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0708 23:43:58.003743       1 shared_informer.go:247] Caches are synced for deployment 
	I0708 23:43:58.061826       1 shared_informer.go:247] Caches are synced for disruption 
	I0708 23:43:58.061841       1 disruption.go:371] Sending events to api server.
	I0708 23:43:58.113116       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0708 23:43:58.121312       1 shared_informer.go:247] Caches are synced for resource quota 
	I0708 23:43:58.138517       1 shared_informer.go:247] Caches are synced for resource quota 
	I0708 23:43:58.160698       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-589hd"
	I0708 23:43:58.238962       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rb2ws"
	I0708 23:43:58.288443       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	E0708 23:43:58.326235       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"85756639-7788-414f-aae2-a95c8ac59acd", ResourceVersion:"309", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761384625, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000d528a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000d528b8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001394920), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d528d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d528e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d52900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001394940)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001394980)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40014b4240), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000f18168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a56700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400135e8f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000f181b0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0708 23:43:58.358158       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-xtvks"
	I0708 23:43:58.384252       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-mnwpk"
	I0708 23:43:58.532367       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0708 23:43:58.551775       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0708 23:43:58.551796       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0708 23:43:58.636207       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0708 23:43:58.654856       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-xtvks"
	I0708 23:44:42.867632       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
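
The daemon_controller error above ("the object has been modified") is an optimistic-concurrency conflict: the DaemonSet's resourceVersion changed between the controller's read and its status write, and the controller recovers by retrying on its next sync. A minimal sketch of the standard client-go remedy for such conflicts, retry.RetryOnConflict, with a hypothetical label mutation as the payload:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/retry"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
        // Re-read the latest object on every attempt so the update carries
        // a fresh resourceVersion.
        ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kindnet", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if ds.Labels == nil {
            ds.Labels = map[string]string{}
        }
        ds.Labels["example/touched"] = "true" // hypothetical mutation
        _, err = cs.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
        return err
    })
    if err != nil {
        panic(err)
    }
}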
	
	* 
	* ==> kube-proxy [7ca432c9b09536ac66cf25a0ec17f8c36a84ab651260b751a69ef39f9788f51a] <==
	* I0708 23:43:59.522352       1 node.go:172] Successfully retrieved node IP: 192.168.58.2
	I0708 23:43:59.522418       1 server_others.go:140] Detected node IP 192.168.58.2
	W0708 23:43:59.522436       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0708 23:43:59.592863       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0708 23:43:59.592891       1 server_others.go:212] Using iptables Proxier.
	I0708 23:43:59.592900       1 server_others.go:219] creating dualStackProxier for iptables.
	W0708 23:43:59.592910       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0708 23:43:59.593168       1 server.go:643] Version: v1.21.2
	I0708 23:43:59.593489       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	I0708 23:43:59.593530       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	I0708 23:43:59.594089       1 config.go:315] Starting service config controller
	I0708 23:43:59.594140       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0708 23:43:59.594778       1 config.go:224] Starting endpoint slice config controller
	I0708 23:43:59.594818       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0708 23:43:59.596985       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0708 23:43:59.598797       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0708 23:43:59.695058       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0708 23:43:59.695065       1 shared_informer.go:247] Caches are synced for service config 
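
The conntrack.go lines above take effect by writing plain files under /proc/sys. A minimal sketch with the same sysctl names and values as the log; the helper itself is illustrative, not kube-proxy's code:

package main

import (
    "os"
    "path/filepath"
    "strings"
)

// setSysctl writes value to /proc/sys/<name>; '.' separators in a sysctl
// name map to path components (the log already uses '/' form).
func setSysctl(name, value string) error {
    path := filepath.Join("/proc/sys", strings.ReplaceAll(name, ".", "/"))
    return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
    // Requires root, as kube-proxy has inside its pod.
    if err := setSysctl("net/netfilter/nf_conntrack_tcp_timeout_established", "86400"); err != nil {
        panic(err)
    }
    if err := setSysctl("net/netfilter/nf_conntrack_tcp_timeout_close_wait", "3600"); err != nil {
        panic(err)
    }
}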
	
	* 
	* ==> kube-scheduler [66d5fee706a3d37514ade6fc60e464bae90dc029b87fbc25be4abb4197f5f58e] <==
	* E0708 23:43:38.663259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0708 23:43:38.663881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:38.663930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:43:38.663980       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:43:38.664026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:43:38.664104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:43:38.664153       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:43:38.667689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:43:39.506225       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0708 23:43:39.684692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0708 23:43:39.707077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:39.715815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0708 23:43:39.739475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0708 23:43:39.927791       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0708 23:43:39.950708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0708 23:43:40.026534       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0708 23:43:40.052611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.106259       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.125654       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0708 23:43:40.138747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0708 23:43:40.200954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0708 23:43:40.246523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0708 23:43:42.914398       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
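Note: the burst of "forbidden" errors above is the usual kube-scheduler start-up race: its informers begin listing cluster resources before the bootstrap RBAC bindings for system:kube-scheduler have propagated, and the reflectors retry until they do; the cache-sync line at 23:43:42 indicates it recovered. A spot-check of the binding, assuming the cluster is still reachable, is:

    kubectl --context pause-20210708233938-257783 auth can-i list pods --as=system:kube-scheduler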
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-07-08 23:42:57 UTC, end at Thu 2021-07-08 23:45:07 UTC. --
	Jul 08 23:44:13 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:13.913137    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:18 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:18.914320    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:19 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:19.141178    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:23 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:23.915497    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:28 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:28.916031    2084 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 08 23:44:29 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:29.195302    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:39 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:39.262592    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.378521    2084 topology_manager.go:187] "Topology Admit Handler"
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.439076    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd8ce294-9dba-4d2e-8793-cc0862414323-config-volume\") pod \"coredns-558bd4d5db-mnwpk\" (UID: \"cd8ce294-9dba-4d2e-8793-cc0862414323\") "
	Jul 08 23:44:44 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:44.439122    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjk4b\" (UniqueName: \"kubernetes.io/projected/cd8ce294-9dba-4d2e-8793-cc0862414323-kube-api-access-wjk4b\") pod \"coredns-558bd4d5db-mnwpk\" (UID: \"cd8ce294-9dba-4d2e-8793-cc0862414323\") "
	Jul 08 23:44:49 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:49.318435    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:44:49 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:49.796926    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:50 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:50.049924    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:50 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:50.459815    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:51.106860    2084 container.go:586] Failed to update stats for container "/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7": /sys/fs/cgroup/cpuset/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/cpuset.cpus found to be empty, continuing to push stats
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:51.127776    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.610872    2084 topology_manager.go:187] "Topology Admit Handler"
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.679061    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ndmf\" (UniqueName: \"kubernetes.io/projected/939f2223-21e0-4e8d-8f43-fd8f9cc992b8-kube-api-access-6ndmf\") pod \"storage-provisioner\" (UID: \"939f2223-21e0-4e8d-8f43-fd8f9cc992b8\") "
	Jul 08 23:44:51 pause-20210708233938-257783 kubelet[2084]: I0708 23:44:51.679129    2084 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/939f2223-21e0-4e8d-8f43-fd8f9cc992b8-tmp\") pod \"storage-provisioner\" (UID: \"939f2223-21e0-4e8d-8f43-fd8f9cc992b8\") "
	Jul 08 23:44:52 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:52.247269    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:53 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:53.995237    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:56 pause-20210708233938-257783 kubelet[2084]: W0708 23:44:56.896369    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:44:59 pause-20210708233938-257783 kubelet[2084]: E0708 23:44:59.371590    2084 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7/docker/9e0e986f196e9933a227a91f9d8412529f50b3dccb811ceef4a7f87d510724d7\": RecentStats: unable to find data in memory cache]"
	Jul 08 23:45:01 pause-20210708233938-257783 kubelet[2084]: W0708 23:45:01.193607    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
	Jul 08 23:45:06 pause-20210708233938-257783 kubelet[2084]: W0708 23:45:06.513384    2084 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
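Note: the kubelet's "Container runtime network not ready" errors persist only until a CNI plugin writes its config into /etc/cni/net.d/; the Topology Admit Handler lines for coredns and storage-provisioner at 23:44:44 and 23:44:51 suggest networking did come up. One way to confirm, assuming the node container is still running, is to list the CNI config directory via minikube's ssh helper:

    minikube -p pause-20210708233938-257783 ssh "ls /etc/cni/net.d/"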
	
	* 
	* ==> storage-provisioner [ebc191d78d332243e1f18d00bcb147100cee8664418328d8f4922bc53818ccaf] <==
	* I0708 23:44:52.156408       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0708 23:44:52.170055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0708 23:44:52.170092       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0708 23:44:52.181346       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0708 23:44:52.181466       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409!
	I0708 23:44:52.181651       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"812885a7-6ecb-4200-9882-e4b3a6fd0939", APIVersion:"v1", ResourceVersion:"519", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409 became leader
	I0708 23:44:52.282548       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210708233938-257783_ab075a77-472d-4b32-8364-c37a19ea8409!
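Note: the provisioner's leader election is backed by an Endpoints lock in kube-system, as the LeaderElection event above shows. The current lease holder is recorded on that object and can be read back with plain kubectl:

    kubectl --context pause-20210708233938-257783 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml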
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-20210708233938-257783 -n pause-20210708233938-257783
helpers_test.go:262: (dbg) Run:  kubectl --context pause-20210708233938-257783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestPause/serial/PauseAgain]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context pause-20210708233938-257783 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context pause-20210708233938-257783 describe pod : exit status 1 (58.271087ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:275: kubectl --context pause-20210708233938-257783 describe pod : exit status 1
--- FAIL: TestPause/serial/PauseAgain (5.69s)
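Note: the non-zero exit above is the post-mortem helper running `kubectl describe pod` with no pod names: the field selector matched no non-running pods, so the argument list was empty and kubectl rejected it. A minimal guard, sketched here as a hypothetical shell snippet rather than the harness's actual code, would skip the describe when the selector returns nothing:

    pods=$(kubectl --context pause-20210708233938-257783 get po -A \
        --field-selector=status.phase!=Running -o jsonpath='{.items[*].metadata.name}')
    # Only describe when the selector actually matched something.
    [ -n "$pods" ] && kubectl --context pause-20210708233938-257783 describe pod $pods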

TestNetworkPlugins/group/cilium/Start (540.68s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p cilium-20210708233940-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=crio
E0709 00:09:57.622814  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:10:04.592703  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p cilium-20210708233940-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=crio: exit status 80 (9m0.648613365s)

-- stdout --
	* [cilium-20210708233940-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=11942
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node cilium-20210708233940-257783 in cluster cilium-20210708233940-257783
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.2 on CRI-O 1.20.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0709 00:09:48.667285  459847 out.go:286] Setting OutFile to fd 1 ...
	I0709 00:09:48.667440  459847 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0709 00:09:48.667459  459847 out.go:299] Setting ErrFile to fd 2...
	I0709 00:09:48.667472  459847 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0709 00:09:48.667608  459847 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0709 00:09:48.667967  459847 out.go:293] Setting JSON to false
	I0709 00:09:48.668951  459847 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10338,"bootTime":1625779051,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0709 00:09:48.669036  459847 start.go:121] virtualization:  
	I0709 00:09:48.672716  459847 out.go:165] * [cilium-20210708233940-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0709 00:09:48.674777  459847 out.go:165]   - MINIKUBE_LOCATION=11942
	I0709 00:09:48.676535  459847 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0709 00:09:48.678487  459847 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0709 00:09:48.680291  459847 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0709 00:09:48.680814  459847 driver.go:335] Setting default libvirt URI to qemu:///system
	I0709 00:09:48.734210  459847 docker.go:132] docker version: linux-20.10.7
	I0709 00:09:48.734289  459847 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0709 00:09:48.856721  459847 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:39 SystemTime:2021-07-09 00:09:48.77199843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0709 00:09:48.856887  459847 docker.go:244] overlay module found
	I0709 00:09:48.860933  459847 out.go:165] * Using the docker driver based on user configuration
	I0709 00:09:48.860950  459847 start.go:278] selected driver: docker
	I0709 00:09:48.860955  459847 start.go:751] validating driver "docker" against <nil>
	I0709 00:09:48.860969  459847 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0709 00:09:48.861007  459847 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0709 00:09:48.861018  459847 out.go:230] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0709 00:09:48.862825  459847 out.go:165]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0709 00:09:48.863110  459847 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0709 00:09:48.968483  459847 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:39 SystemTime:2021-07-09 00:09:48.895907228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0709 00:09:48.968587  459847 start_flags.go:261] no existing cluster config was found, will generate one from the flags 
	I0709 00:09:48.968719  459847 start_flags.go:687] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 00:09:48.968739  459847 cni.go:93] Creating CNI manager for "cilium"
	I0709 00:09:48.968747  459847 start_flags.go:270] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0709 00:09:48.968758  459847 start_flags.go:275] config:
	{Name:cilium-20210708233940-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:cilium-20210708233940-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0709 00:09:48.970908  459847 out.go:165] * Starting control plane node cilium-20210708233940-257783 in cluster cilium-20210708233940-257783
	I0709 00:09:48.970942  459847 cache.go:117] Beginning downloading kic base image for docker with crio
	I0709 00:09:48.972825  459847 out.go:165] * Pulling base image ...
	I0709 00:09:48.972844  459847 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0709 00:09:48.972869  459847 preload.go:150] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4
	I0709 00:09:48.972879  459847 cache.go:56] Caching tarball of preloaded images
	I0709 00:09:48.972994  459847 preload.go:174] Found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0709 00:09:48.973012  459847 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.2 on crio
	I0709 00:09:48.973105  459847 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/config.json ...
	I0709 00:09:48.973125  459847 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/config.json: {Name:mk25a98ee9eb527b67f9abec6c2711b5460d36c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 00:09:48.973250  459847 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0709 00:09:49.014631  459847 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0709 00:09:49.014649  459847 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0709 00:09:49.014663  459847 cache.go:205] Successfully downloaded all kic artifacts
	I0709 00:09:49.014687  459847 start.go:313] acquiring machines lock for cilium-20210708233940-257783: {Name:mkc851e3b2cb17fb43c4768ada54676dabb99487 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 00:09:49.014780  459847 start.go:317] acquired machines lock for "cilium-20210708233940-257783" in 75.627µs
	I0709 00:09:49.014805  459847 start.go:89] Provisioning new machine with config: &{Name:cilium-20210708233940-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:cilium-20210708233940-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0709 00:09:49.014872  459847 start.go:126] createHost starting for "" (driver="docker")
	I0709 00:09:49.021093  459847 out.go:192] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0709 00:09:49.021353  459847 start.go:160] libmachine.API.Create for "cilium-20210708233940-257783" (driver="docker")
	I0709 00:09:49.021377  459847 client.go:168] LocalClient.Create starting
	I0709 00:09:49.021428  459847 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem
	I0709 00:09:49.021477  459847 main.go:130] libmachine: Decoding PEM data...
	I0709 00:09:49.021496  459847 main.go:130] libmachine: Parsing certificate...
	I0709 00:09:49.021607  459847 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem
	I0709 00:09:49.021625  459847 main.go:130] libmachine: Decoding PEM data...
	I0709 00:09:49.021638  459847 main.go:130] libmachine: Parsing certificate...
	I0709 00:09:49.022017  459847 cli_runner.go:115] Run: docker network inspect cilium-20210708233940-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0709 00:09:49.070574  459847 cli_runner.go:162] docker network inspect cilium-20210708233940-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0709 00:09:49.070635  459847 network_create.go:255] running [docker network inspect cilium-20210708233940-257783] to gather additional debugging logs...
	I0709 00:09:49.070654  459847 cli_runner.go:115] Run: docker network inspect cilium-20210708233940-257783
	W0709 00:09:49.110489  459847 cli_runner.go:162] docker network inspect cilium-20210708233940-257783 returned with exit code 1
	I0709 00:09:49.110518  459847 network_create.go:258] error running [docker network inspect cilium-20210708233940-257783]: docker network inspect cilium-20210708233940-257783: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20210708233940-257783
	I0709 00:09:49.110531  459847 network_create.go:260] output of [docker network inspect cilium-20210708233940-257783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20210708233940-257783
	
	** /stderr **
	I0709 00:09:49.110580  459847 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0709 00:09:49.150189  459847 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-ee7b416b9eaf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:38:2d:4c:a5}}
	I0709 00:09:49.150500  459847 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0x400000f408] misses:0}
	I0709 00:09:49.150532  459847 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0709 00:09:49.150549  459847 network_create.go:106] attempt to create docker network cilium-20210708233940-257783 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0709 00:09:49.150603  459847 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20210708233940-257783
	I0709 00:09:49.300798  459847 network_create.go:90] docker network cilium-20210708233940-257783 192.168.58.0/24 created
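	Note: with the 192.168.58.0/24 network created, the chosen subnet and gateway can be read back with the standard docker CLI (run outside the harness):
	    docker network inspect cilium-20210708233940-257783 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'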
	I0709 00:09:49.300831  459847 kic.go:106] calculated static IP "192.168.58.2" for the "cilium-20210708233940-257783" container
	I0709 00:09:49.300894  459847 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0709 00:09:49.351105  459847 cli_runner.go:115] Run: docker volume create cilium-20210708233940-257783 --label name.minikube.sigs.k8s.io=cilium-20210708233940-257783 --label created_by.minikube.sigs.k8s.io=true
	I0709 00:09:49.390935  459847 oci.go:102] Successfully created a docker volume cilium-20210708233940-257783
	I0709 00:09:49.391004  459847 cli_runner.go:115] Run: docker run --rm --name cilium-20210708233940-257783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210708233940-257783 --entrypoint /usr/bin/test -v cilium-20210708233940-257783:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0709 00:09:50.099883  459847 oci.go:106] Successfully prepared a docker volume cilium-20210708233940-257783
	W0709 00:09:50.099925  459847 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0709 00:09:50.099934  459847 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0709 00:09:50.099985  459847 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0709 00:09:50.100173  459847 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0709 00:09:50.100193  459847 kic.go:179] Starting extracting preloaded images to volume ...
	I0709 00:09:50.100239  459847 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cilium-20210708233940-257783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0709 00:09:50.194804  459847 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20210708233940-257783 --name cilium-20210708233940-257783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210708233940-257783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20210708233940-257783 --network cilium-20210708233940-257783 --ip 192.168.58.2 --volume cilium-20210708233940-257783:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0709 00:09:51.068877  459847 cli_runner.go:115] Run: docker container inspect cilium-20210708233940-257783 --format={{.State.Running}}
	I0709 00:09:51.125370  459847 cli_runner.go:115] Run: docker container inspect cilium-20210708233940-257783 --format={{.State.Status}}
	I0709 00:09:51.194232  459847 cli_runner.go:115] Run: docker exec cilium-20210708233940-257783 stat /var/lib/dpkg/alternatives/iptables
	I0709 00:09:51.328418  459847 oci.go:278] the created container "cilium-20210708233940-257783" has a running status.
	I0709 00:09:51.328444  459847 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/cilium-20210708233940-257783/id_rsa...
	I0709 00:09:52.349805  459847 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/cilium-20210708233940-257783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0709 00:09:52.467565  459847 cli_runner.go:115] Run: docker container inspect cilium-20210708233940-257783 --format={{.State.Status}}
	I0709 00:09:52.522663  459847 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0709 00:09:52.522682  459847 kic_runner.go:115] Args: [docker exec --privileged cilium-20210708233940-257783 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0709 00:10:02.235626  459847 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cilium-20210708233940-257783:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (12.1353503s)
	I0709 00:10:02.235646  459847 kic.go:188] duration metric: took 12.135451 seconds to extract preloaded images to volume
	I0709 00:10:02.235733  459847 cli_runner.go:115] Run: docker container inspect cilium-20210708233940-257783 --format={{.State.Status}}
	I0709 00:10:02.299216  459847 machine.go:88] provisioning docker machine ...
	I0709 00:10:02.299247  459847 ubuntu.go:169] provisioning hostname "cilium-20210708233940-257783"
	I0709 00:10:02.299322  459847 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210708233940-257783
	I0709 00:10:02.359928  459847 main.go:130] libmachine: Using SSH client type: native
	I0709 00:10:02.360096  459847 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49717 <nil> <nil>}
	I0709 00:10:02.360113  459847 main.go:130] libmachine: About to run SSH command:
	sudo hostname cilium-20210708233940-257783 && echo "cilium-20210708233940-257783" | sudo tee /etc/hostname
	I0709 00:10:02.526521  459847 main.go:130] libmachine: SSH cmd err, output: <nil>: cilium-20210708233940-257783
	
	I0709 00:10:02.526588  459847 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210708233940-257783
	I0709 00:10:02.583805  459847 main.go:130] libmachine: Using SSH client type: native
	I0709 00:10:02.583967  459847 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49717 <nil> <nil>}
	I0709 00:10:02.583992  459847 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20210708233940-257783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20210708233940-257783/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20210708233940-257783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 00:10:02.703088  459847 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0709 00:10:02.703145  459847 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube}
	I0709 00:10:02.703178  459847 ubuntu.go:177] setting up certificates
	I0709 00:10:02.703207  459847 provision.go:83] configureAuth start
	I0709 00:10:02.703271  459847 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210708233940-257783
	I0709 00:10:02.750205  459847 provision.go:137] copyHostCerts
	I0709 00:10:02.750256  459847 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem, removing ...
	I0709 00:10:02.750263  459847 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem
	I0709 00:10:02.750314  459847 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.pem (1078 bytes)
	I0709 00:10:02.750381  459847 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem, removing ...
	I0709 00:10:02.750387  459847 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem
	I0709 00:10:02.750408  459847 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cert.pem (1123 bytes)
	I0709 00:10:02.750451  459847 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem, removing ...
	I0709 00:10:02.750455  459847 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem
	I0709 00:10:02.750474  459847 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/key.pem (1679 bytes)
	I0709 00:10:02.750506  459847 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem org=jenkins.cilium-20210708233940-257783 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20210708233940-257783]
	I0709 00:10:03.056885  459847 provision.go:171] copyRemoteCerts
	I0709 00:10:03.056968  459847 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 00:10:03.057027  459847 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210708233940-257783
	I0709 00:10:03.098454  459847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49717 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/cilium-20210708233940-257783/id_rsa Username:docker}
	I0709 00:10:03.182343  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0709 00:10:03.196963  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0709 00:10:03.211479  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0709 00:10:03.225871  459847 provision.go:86] duration metric: configureAuth took 522.643782ms
	I0709 00:10:03.225886  459847 ubuntu.go:193] setting minikube options for container-runtime
	I0709 00:10:03.226124  459847 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210708233940-257783
	I0709 00:10:03.260897  459847 main.go:130] libmachine: Using SSH client type: native
	I0709 00:10:03.261046  459847 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x3703d0] 0x3703a0 <nil>  [] 0s} 127.0.0.1 49717 <nil> <nil>}
	I0709 00:10:03.261066  459847 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	I0709 00:10:03.395241  459847 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
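	Note: the /etc/sysconfig/crio.minikube file written above is how minikube passes the whole service CIDR (10.96.0.0/12) to CRI-O as an insecure registry. If it needs verifying, the file can be read back from the node, e.g.:
	    minikube -p cilium-20210708233940-257783 ssh "cat /etc/sysconfig/crio.minikube"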
	I0709 00:10:03.395296  459847 machine.go:91] provisioned docker machine in 1.096059843s
	I0709 00:10:03.395315  459847 client.go:171] LocalClient.Create took 14.373929566s
	I0709 00:10:03.395338  459847 start.go:168] duration metric: libmachine.API.Create for "cilium-20210708233940-257783" took 14.37398513s
	I0709 00:10:03.395355  459847 start.go:267] post-start starting for "cilium-20210708233940-257783" (driver="docker")
	I0709 00:10:03.395376  459847 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 00:10:03.395436  459847 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 00:10:03.395482  459847 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210708233940-257783
	I0709 00:10:03.439845  459847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49717 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/cilium-20210708233940-257783/id_rsa Username:docker}
	I0709 00:10:03.522408  459847 ssh_runner.go:149] Run: cat /etc/os-release
	I0709 00:10:03.525036  459847 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0709 00:10:03.525057  459847 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0709 00:10:03.525069  459847 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0709 00:10:03.525078  459847 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0709 00:10:03.525086  459847 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/addons for local assets ...
	I0709 00:10:03.525126  459847 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/files for local assets ...
	I0709 00:10:03.525210  459847 start.go:270] post-start completed in 129.8346ms
	I0709 00:10:03.525477  459847 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210708233940-257783
	I0709 00:10:03.605228  459847 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/config.json ...
	I0709 00:10:03.605420  459847 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 00:10:03.605465  459847 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210708233940-257783
	I0709 00:10:03.655792  459847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49717 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/cilium-20210708233940-257783/id_rsa Username:docker}
	I0709 00:10:03.739641  459847 start.go:129] duration metric: createHost completed in 14.724759166s
	I0709 00:10:03.739660  459847 start.go:80] releasing machines lock for "cilium-20210708233940-257783", held for 14.724868269s
	I0709 00:10:03.739743  459847 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210708233940-257783
	I0709 00:10:03.779182  459847 ssh_runner.go:149] Run: systemctl --version
	I0709 00:10:03.779225  459847 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210708233940-257783
	I0709 00:10:03.779232  459847 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0709 00:10:03.779291  459847 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210708233940-257783
	I0709 00:10:03.834524  459847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49717 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/cilium-20210708233940-257783/id_rsa Username:docker}
	I0709 00:10:03.843553  459847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49717 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/cilium-20210708233940-257783/id_rsa Username:docker}
	I0709 00:10:04.052708  459847 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0709 00:10:04.072701  459847 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0709 00:10:04.081210  459847 docker.go:153] disabling docker service ...
	I0709 00:10:04.081252  459847 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0709 00:10:04.089497  459847 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0709 00:10:04.097830  459847 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0709 00:10:04.177921  459847 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0709 00:10:04.258936  459847 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
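
The docker.go:153 sequence above stops and masks Docker so CRI-O is the only container runtime on the node. Condensed into one script (commands taken from the log; the trailing check is an illustrative assumption):

    sudo systemctl stop -f containerd
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    # is-active exits non-zero once docker is stopped and masked
    sudo systemctl is-active --quiet docker || echo "docker disabled"
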
	I0709 00:10:04.266275  459847 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 00:10:04.282345  459847 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
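
The two commands above point crictl at the CRI-O socket and pin the pause image in crio.conf. A quick way to confirm the wiring on the node (the crictl call is an illustrative assumption; the file contents are copied from the log):

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/crio/crio.sock
    # image-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl info    # reports the runtime's readiness conditions
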
	I0709 00:10:04.289253  459847 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 00:10:04.296111  459847 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 00:10:04.304963  459847 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0709 00:10:04.392156  459847 ssh_runner.go:149] Run: sudo systemctl start crio
	I0709 00:10:04.570590  459847 start.go:386] Will wait 60s for socket path /var/run/crio/crio.sock
	I0709 00:10:04.570642  459847 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0709 00:10:04.573390  459847 start.go:411] Will wait 60s for crictl version
	I0709 00:10:04.573433  459847 ssh_runner.go:149] Run: sudo crictl version
	I0709 00:10:04.600708  459847 start.go:420] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0709 00:10:04.600763  459847 ssh_runner.go:149] Run: crio --version
	I0709 00:10:04.664434  459847 ssh_runner.go:149] Run: crio --version
	I0709 00:10:04.732502  459847 out.go:165] * Preparing Kubernetes v1.21.2 on CRI-O 1.20.3 ...
	I0709 00:10:04.732583  459847 cli_runner.go:115] Run: docker network inspect cilium-20210708233940-257783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0709 00:10:04.766359  459847 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0709 00:10:04.770034  459847 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
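
The /etc/hosts rewrite above is idempotent: it filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the temp file back over /etc/hosts. An illustrative check of the result (not part of the log):

    grep 'host.minikube.internal' /etc/hosts
    # 192.168.58.1	host.minikube.internal
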
	I0709 00:10:04.780068  459847 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0709 00:10:04.780146  459847 ssh_runner.go:149] Run: sudo crictl images --output json
	I0709 00:10:04.858526  459847 crio.go:424] all images are preloaded for cri-o runtime.
	I0709 00:10:04.858545  459847 crio.go:333] Images already preloaded, skipping extraction
	I0709 00:10:04.858593  459847 ssh_runner.go:149] Run: sudo crictl images --output json
	I0709 00:10:04.884796  459847 crio.go:424] all images are preloaded for cri-o runtime.
	I0709 00:10:04.884815  459847 cache_images.go:74] Images are preloaded, skipping loading
	I0709 00:10:04.884872  459847 ssh_runner.go:149] Run: crio config
	I0709 00:10:04.960121  459847 cni.go:93] Creating CNI manager for "cilium"
	I0709 00:10:04.960143  459847 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0709 00:10:04.960155  459847 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20210708233940-257783 NodeName:cilium-20210708233940-257783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0709 00:10:04.960308  459847 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "cilium-20210708233940-257783"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	
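
This manifest is written to /var/tmp/minikube/kubeadm.yaml (see the scp step below) and consumed by the kubeadm init run later in the log. A low-risk way to sanity-check such a file on the node is a dry run (the flag is standard kubeadm; this exact invocation is an assumption, not something minikube executes):

    sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
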
	I0709 00:10:04.960400  459847 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=cilium-20210708233940-257783 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.2 ClusterName:cilium-20210708233940-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
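
The unit text above becomes the 559-byte drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf written below; the empty ExecStart= line clears any packaged command before substituting minikube's own. To inspect the merged unit after the fact (illustrative commands, not from the log):

    systemctl cat kubelet          # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload   # pick up drop-in changes
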
	I0709 00:10:04.960452  459847 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
	I0709 00:10:04.966966  459847 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 00:10:04.967032  459847 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 00:10:04.972417  459847 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0709 00:10:04.982492  459847 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 00:10:04.992928  459847 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1885 bytes)
	I0709 00:10:05.003019  459847 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0709 00:10:05.005394  459847 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 00:10:05.012466  459847 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783 for IP: 192.168.58.2
	I0709 00:10:05.012503  459847 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key
	I0709 00:10:05.012520  459847 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key
	I0709 00:10:05.012575  459847 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/client.key
	I0709 00:10:05.012584  459847 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/client.crt with IP's: []
	I0709 00:10:05.844401  459847 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/client.crt ...
	I0709 00:10:05.844425  459847 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/client.crt: {Name:mk822ea5f98390760e7ac53d213735e9aaaec6bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 00:10:05.844578  459847 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/client.key ...
	I0709 00:10:05.844593  459847 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/client.key: {Name:mke22fa9f154e2aab302b9d4a7539671cc6e6fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 00:10:05.845116  459847 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.key.cee25041
	I0709 00:10:05.845130  459847 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0709 00:10:06.342890  459847 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.crt.cee25041 ...
	I0709 00:10:06.342911  459847 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.crt.cee25041: {Name:mkb4d1c93c4bf7282eca59bee33b85fe107137d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 00:10:06.343061  459847 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.key.cee25041 ...
	I0709 00:10:06.343094  459847 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.key.cee25041: {Name:mk911424117fc2f8022efe1a843f8a036b6d0f16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 00:10:06.343206  459847 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.crt
	I0709 00:10:06.343277  459847 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.key
	I0709 00:10:06.343343  459847 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/proxy-client.key
	I0709 00:10:06.343364  459847 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/proxy-client.crt with IP's: []
	I0709 00:10:07.531896  459847 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/proxy-client.crt ...
	I0709 00:10:07.531926  459847 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/proxy-client.crt: {Name:mk156e53d1cd8727694a6020b6a971c80f0ba19f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 00:10:07.532093  459847 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/proxy-client.key ...
	I0709 00:10:07.532110  459847 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/proxy-client.key: {Name:mkb348f7be63a5201240bd9503f8c7d278b1d23c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 00:10:07.532778  459847 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem (1338 bytes)
	W0709 00:10:07.532821  459847 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783_empty.pem, impossibly tiny 0 bytes
	I0709 00:10:07.532834  459847 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca-key.pem (1675 bytes)
	I0709 00:10:07.532859  459847 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/ca.pem (1078 bytes)
	I0709 00:10:07.532887  459847 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/cert.pem (1123 bytes)
	I0709 00:10:07.532915  459847 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/key.pem (1679 bytes)
	I0709 00:10:07.533966  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0709 00:10:07.549179  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 00:10:07.562875  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 00:10:07.576863  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/cilium-20210708233940-257783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 00:10:07.590593  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 00:10:07.604214  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0709 00:10:07.620032  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 00:10:07.633438  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 00:10:07.647267  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/certs/257783.pem --> /usr/share/ca-certificates/257783.pem (1338 bytes)
	I0709 00:10:07.661126  459847 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 00:10:07.675092  459847 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
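
With the certificates copied into /var/lib/minikube/certs, the apiserver certificate generated above should carry the SANs listed on the crypto.go:69 line (192.168.58.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). An illustrative verification:

    openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
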
	I0709 00:10:07.686250  459847 ssh_runner.go:149] Run: openssl version
	I0709 00:10:07.690330  459847 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257783.pem && ln -fs /usr/share/ca-certificates/257783.pem /etc/ssl/certs/257783.pem"
	I0709 00:10:07.696288  459847 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/257783.pem
	I0709 00:10:07.698675  459847 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jul  8 23:18 /usr/share/ca-certificates/257783.pem
	I0709 00:10:07.698726  459847 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257783.pem
	I0709 00:10:07.708434  459847 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257783.pem /etc/ssl/certs/51391683.0"
	I0709 00:10:07.717947  459847 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 00:10:07.725704  459847 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 00:10:07.728678  459847 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jul  8 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0709 00:10:07.728727  459847 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 00:10:07.733961  459847 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
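
The two ln -fs steps above implement OpenSSL's hashed-directory CA lookup: each certificate in /etc/ssl/certs needs a <subject-hash>.0 symlink, and the hashes in the log (51391683 and b5213941) come straight from openssl x509 -hash. The generic form of the step (a sketch; cert name as in the log):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> b5213941.0 per the log
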
	I0709 00:10:07.740377  459847 kubeadm.go:390] StartCluster: {Name:cilium-20210708233940-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:cilium-20210708233940-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0709 00:10:07.740473  459847 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0709 00:10:07.740514  459847 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0709 00:10:07.763943  459847 cri.go:76] found id: ""
	I0709 00:10:07.763988  459847 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 00:10:07.774660  459847 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 00:10:07.779961  459847 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0709 00:10:07.780000  459847 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 00:10:07.786515  459847 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 00:10:07.786545  459847 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0709 00:10:08.123140  459847 out.go:192]   - Generating certificates and keys ...
	I0709 00:10:14.533365  459847 out.go:192]   - Booting up control plane ...
	I0709 00:10:34.105274  459847 out.go:192]   - Configuring RBAC rules ...
	I0709 00:10:34.548651  459847 cni.go:93] Creating CNI manager for "cilium"
	I0709 00:10:34.550561  459847 out.go:165] * Configuring Cilium (Container Networking Interface) ...
	I0709 00:10:34.550625  459847 ssh_runner.go:149] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I0709 00:10:34.568001  459847 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.2/kubectl ...
	I0709 00:10:34.568048  459847 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (18465 bytes)
	I0709 00:10:34.579306  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
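
The CNI step above first ensures a BPF filesystem is mounted at /sys/fs/bpf (Cilium depends on one), then applies the 18465-byte manifest with the bundled kubectl. Checking the rollout afterwards (the k8s-app=cilium selector is the stock Cilium label, an assumption here):

    mount | grep /sys/fs/bpf                           # bpffs must be mounted
    kubectl -n kube-system get pods -l k8s-app=cilium  # agent pods, e.g. cilium-fdhnb below
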
	I0709 00:10:35.458989  459847 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 00:10:35.459096  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:35.459167  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=960468aa0cf6d681e9f0d567c8904e583bdf32d5 minikube.k8s.io/name=cilium-20210708233940-257783 minikube.k8s.io/updated_at=2021_07_09T00_10_35_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:35.604910  459847 ops.go:34] apiserver oom_adj: -16
	I0709 00:10:35.604959  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:36.283210  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:36.783548  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:37.283166  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:37.783336  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:38.283618  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:38.783198  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:39.283413  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:39.783304  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:40.282832  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:40.783345  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:41.282998  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:41.783130  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:42.283421  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:42.783042  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:43.283288  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:43.783352  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:44.282866  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:44.783171  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:45.283165  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:45.783061  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:46.282658  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:46.783262  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:47.283614  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:47.783098  459847 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 00:10:47.929553  459847 kubeadm.go:985] duration metric: took 12.47049428s to wait for elevateKubeSystemPrivileges.
	I0709 00:10:47.929576  459847 kubeadm.go:392] StartCluster complete in 40.18920498s
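
The half-second cadence of the kubectl get sa default calls above is minikube polling for the default service account to exist before binding kube-system to cluster-admin; as a shell loop the same wait would look like (a sketch, not the actual Go implementation):

    until sudo /var/lib/minikube/binaries/v1.21.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
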
	I0709 00:10:47.929591  459847 settings.go:142] acquiring lock: {Name:mkd7e81a263e91a8570dc867d9c6f95db0e3f272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 00:10:47.929659  459847 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0709 00:10:47.931017  459847 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig: {Name:mk7ece99e42242db0c85d6c11531cc9d1c12a34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 00:10:48.514164  459847 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20210708233940-257783" rescaled to 1
	I0709 00:10:48.514208  459847 start.go:220] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
	I0709 00:10:48.516373  459847 out.go:165] * Verifying Kubernetes components...
	I0709 00:10:48.516429  459847 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0709 00:10:48.514294  459847 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 00:10:48.514486  459847 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0709 00:10:48.516521  459847 addons.go:59] Setting storage-provisioner=true in profile "cilium-20210708233940-257783"
	I0709 00:10:48.516535  459847 addons.go:135] Setting addon storage-provisioner=true in "cilium-20210708233940-257783"
	W0709 00:10:48.516541  459847 addons.go:147] addon storage-provisioner should already be in state true
	I0709 00:10:48.516566  459847 host.go:66] Checking if "cilium-20210708233940-257783" exists ...
	I0709 00:10:48.516567  459847 addons.go:59] Setting default-storageclass=true in profile "cilium-20210708233940-257783"
	I0709 00:10:48.516581  459847 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20210708233940-257783"
	I0709 00:10:48.516894  459847 cli_runner.go:115] Run: docker container inspect cilium-20210708233940-257783 --format={{.State.Status}}
	I0709 00:10:48.517074  459847 cli_runner.go:115] Run: docker container inspect cilium-20210708233940-257783 --format={{.State.Status}}
	I0709 00:10:48.619626  459847 out.go:165]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 00:10:48.619726  459847 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 00:10:48.619742  459847 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 00:10:48.619788  459847 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210708233940-257783
	I0709 00:10:48.681990  459847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49717 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/cilium-20210708233940-257783/id_rsa Username:docker}
	I0709 00:10:48.725765  459847 addons.go:135] Setting addon default-storageclass=true in "cilium-20210708233940-257783"
	W0709 00:10:48.725786  459847 addons.go:147] addon default-storageclass should already be in state true
	I0709 00:10:48.725808  459847 host.go:66] Checking if "cilium-20210708233940-257783" exists ...
	I0709 00:10:48.726247  459847 cli_runner.go:115] Run: docker container inspect cilium-20210708233940-257783 --format={{.State.Status}}
	I0709 00:10:48.801550  459847 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 00:10:48.801571  459847 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 00:10:48.801620  459847 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210708233940-257783
	I0709 00:10:48.875573  459847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49717 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/cilium-20210708233940-257783/id_rsa Username:docker}
	I0709 00:10:49.024621  459847 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 00:10:49.058242  459847 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 00:10:49.122160  459847 node_ready.go:35] waiting up to 5m0s for node "cilium-20210708233940-257783" to be "Ready" ...
	I0709 00:10:49.122349  459847 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 00:10:49.133945  459847 node_ready.go:49] node "cilium-20210708233940-257783" has status "Ready":"True"
	I0709 00:10:49.133964  459847 node_ready.go:38] duration metric: took 11.779444ms waiting for node "cilium-20210708233940-257783" to be "Ready" ...
	I0709 00:10:49.133973  459847 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 00:10:49.193531  459847 pod_ready.go:78] waiting up to 5m0s for pod "cilium-fdhnb" in "kube-system" namespace to be "Ready" ...
	I0709 00:10:50.105343  459847 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.047047261s)
	I0709 00:10:50.105371  459847 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.080700269s)
	I0709 00:10:50.110307  459847 out.go:165] * Enabled addons: default-storageclass, storage-provisioner
	I0709 00:10:50.110324  459847 addons.go:344] enableAddons completed in 1.595841405s
	I0709 00:10:50.105618  459847 start.go:730] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
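
The replace pipeline at 00:10:49 splices a hosts block into the CoreDNS Corefile ahead of its forward directive, mapping host.minikube.internal to 192.168.58.1 for in-cluster clients. To see the injected stanza (illustrative):

    kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A3 'hosts {'
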
	I0709 00:10:51.241248  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:10:53.296302  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:10:55.431542  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:10:57.764793  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:00.248616  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:02.739796  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:04.748401  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:07.241465  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:09.245210  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:11.740387  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:14.240240  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:16.247041  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:18.745291  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:21.240555  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:23.739836  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:26.238676  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:28.243442  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:30.244554  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:32.740725  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:34.745011  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:37.241632  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:39.740095  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:41.740122  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:43.740833  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:45.741324  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:48.241797  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:50.740106  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:52.745650  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:54.748861  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:57.251095  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:11:59.741374  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:02.241620  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:04.739224  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:06.743970  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:09.240004  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:11.240118  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:13.240233  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:15.740452  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:17.740494  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:20.240435  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:22.240750  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:24.740571  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:26.741350  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:29.245968  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:31.741630  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:33.745814  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:36.239501  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:38.240058  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:40.240481  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:42.746055  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:45.240821  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:47.241339  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:49.741023  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:52.240337  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:54.240459  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:56.739686  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:12:58.741497  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:00.744454  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:03.240439  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:05.241322  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:07.741642  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:10.239543  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:12.740252  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:14.740711  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:17.239367  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:19.240652  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:21.740585  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:23.740777  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:26.244289  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:28.741813  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:31.239794  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:33.739739  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:35.740627  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:38.240634  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:40.739678  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:42.751243  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:45.239683  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:47.740242  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:50.239856  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:52.240055  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:54.740109  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:56.740606  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:13:59.240068  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:01.740205  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:04.240141  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:06.740122  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:08.740192  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:11.240245  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:13.742100  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:15.746190  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:18.239728  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:20.739574  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:22.739995  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:24.751958  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:27.239822  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:29.740622  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:32.241927  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:34.743435  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:37.240156  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:39.744964  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:42.239326  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:44.239850  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:46.740674  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:48.741952  459847 pod_ready.go:102] pod "cilium-fdhnb" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:49.244679  459847 pod_ready.go:81] duration metric: took 4m0.05111447s waiting for pod "cilium-fdhnb" in "kube-system" namespace to be "Ready" ...
	E0709 00:14:49.244703  459847 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
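
The four-minute poll that just timed out is functionally a readiness wait on the cilium-fdhnb pod; the equivalent one-liner (kubectl wait is standard, this invocation is an assumption rather than what minikube runs):

    kubectl -n kube-system wait --for=condition=Ready pod/cilium-fdhnb --timeout=5m
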
	I0709 00:14:49.244711  459847 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace to be "Ready" ...
	I0709 00:14:51.256936  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:53.257246  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:55.754003  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:14:58.332669  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:00.491165  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:02.752519  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:04.753259  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:07.253815  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:09.752838  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:11.755836  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:14.255304  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:16.758434  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:19.253021  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:21.752887  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:23.752920  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:26.259726  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:28.753324  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:31.253283  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:33.753521  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:36.253257  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:38.752495  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:41.252586  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:43.252838  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:45.819163  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:48.252691  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:50.253683  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:52.752689  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:55.252601  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:57.253366  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:15:59.254132  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:01.752057  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:03.752677  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:05.753223  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:07.753667  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:10.252930  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:12.753112  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:14.753222  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:17.252296  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:19.252434  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:21.252871  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:23.753234  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:26.289107  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:28.752982  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:30.755275  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:33.252320  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:35.252779  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:37.253903  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:39.753009  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:41.753596  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:44.252452  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:46.253681  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:48.753695  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:50.760686  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:53.253754  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:55.258305  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:57.414508  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:16:59.753451  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:02.256598  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:04.753501  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:06.777215  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:09.253848  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:11.755059  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:13.766865  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:16.253448  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:18.752839  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:20.753048  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:22.753340  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:25.252416  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:27.254392  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:29.752901  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:31.753774  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:34.256614  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:36.753725  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:39.253310  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:41.758063  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:44.252850  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:46.753365  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:48.753898  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:50.754246  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:53.253736  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:55.753563  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:17:58.254370  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:00.752598  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:02.753403  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:05.252580  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:07.253726  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:09.753004  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:12.252646  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:14.256877  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:16.753449  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:19.252858  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:21.252950  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:23.753480  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:26.252442  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:28.254851  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:30.752847  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:32.753183  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:35.252716  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:37.254050  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:39.753731  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:42.252943  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:44.753163  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:47.253331  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:49.256503  459847 pod_ready.go:102] pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace has status "Ready":"False"
	I0709 00:18:49.256520  459847 pod_ready.go:81] duration metric: took 4m0.011801464s waiting for pod "cilium-operator-99d899fb5-f98p6" in "kube-system" namespace to be "Ready" ...
	E0709 00:18:49.256529  459847 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0709 00:18:49.256541  459847 pod_ready.go:38] duration metric: took 8m0.122534683s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 00:18:49.260889  459847 out.go:165] 
	W0709 00:18:49.261887  459847 out.go:230] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0709 00:18:49.261941  459847 out.go:230] * 
	W0709 00:18:49.264796  459847 out.go:230] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                           │
	│                                                                                                                                                         │
	│    * Please attach the following file to the GitHub issue:                                                                                              │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0709 00:18:49.267690  459847 out.go:165] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (540.68s)
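The failure above is a readiness-poll timeout: pod_ready.go re-checks the cilium-operator pod roughly every 2.5 seconds and gives up after 4 minutes. For orientation, here is a minimal client-go sketch of that style of wait loop. It is not minikube's actual pod_ready.go; the kubeconfig path is a placeholder, and the poll cadence is simply read off the timestamps above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; minikube resolves its own per-profile config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2.5s with a 4m cap, matching the cadence and the
	// "took 4m0.011801464s" timeout visible in the log.
	err = wait.PollImmediate(2500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"cilium-operator-99d899fb5-f98p6", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as transient and keep polling
		}
		fmt.Printf("pod %q has status Ready=%v\n", pod.Name, isPodReady(pod))
		return isPodReady(pod), nil
	})
	if err != nil {
		fmt.Println("timed out waiting for the condition:", err)
	}
}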

                                                
                                    

Test pass (210/256)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 11.39
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.08
10 TestDownloadOnly/v1.21.2/json-events 14.47
11 TestDownloadOnly/v1.21.2/preload-exists 0
15 TestDownloadOnly/v1.21.2/LogsDuration 0.07
17 TestDownloadOnly/v1.22.0-beta.0/json-events 12.82
18 TestDownloadOnly/v1.22.0-beta.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-beta.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.35
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.21
31 TestAddons/parallel/MetricsServer 5.62
34 TestAddons/parallel/CSI 43.79
35 TestAddons/parallel/GCPAuth 15.64
36 TestCertOptions 48.43
38 TestForceSystemdFlag 48.67
39 TestForceSystemdEnv 64.84
44 TestErrorSpam/setup 47.15
45 TestErrorSpam/start 0.88
46 TestErrorSpam/status 0.92
47 TestErrorSpam/pause 5.54
48 TestErrorSpam/unpause 1.48
49 TestErrorSpam/stop 9.39
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 100.09
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 4.89
56 TestFunctional/serial/KubeContext 0.05
57 TestFunctional/serial/KubectlGetPods 0.28
60 TestFunctional/serial/CacheCmd/cache/add_remote 5.82
61 TestFunctional/serial/CacheCmd/cache/add_local 1.16
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
63 TestFunctional/serial/CacheCmd/cache/list 0.05
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.2
66 TestFunctional/serial/CacheCmd/cache/delete 0.11
67 TestFunctional/serial/MinikubeKubectlCmd 0.39
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
69 TestFunctional/serial/ExtraConfig 39.02
70 TestFunctional/serial/ComponentHealth 0.09
72 TestFunctional/parallel/ConfigCmd 0.44
73 TestFunctional/parallel/DashboardCmd 2.67
74 TestFunctional/parallel/DryRun 0.5
75 TestFunctional/parallel/InternationalLanguage 0.22
76 TestFunctional/parallel/StatusCmd 0.92
77 TestFunctional/parallel/LogsCmd 1.06
78 TestFunctional/parallel/LogsFileCmd 1.39
79 TestFunctional/parallel/MountCmd 5.78
81 TestFunctional/parallel/ServiceCmd 13.53
82 TestFunctional/parallel/AddonsCmd 0.16
83 TestFunctional/parallel/PersistentVolumeClaim 28.53
85 TestFunctional/parallel/SSHCmd 0.53
86 TestFunctional/parallel/CpCmd 0.52
88 TestFunctional/parallel/FileSync 0.39
89 TestFunctional/parallel/CertSync 0.89
93 TestFunctional/parallel/NodeLabels 0.08
94 TestFunctional/parallel/LoadImage 2.12
95 TestFunctional/parallel/RemoveImage 2.62
96 TestFunctional/parallel/BuildImage 2.64
97 TestFunctional/parallel/ListImages 0.46
98 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
100 TestFunctional/parallel/Version/short 0.07
101 TestFunctional/parallel/Version/components 11.22
102 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
103 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
104 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
105 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
106 TestFunctional/parallel/ProfileCmd/profile_list 0.36
107 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
109 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
111 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
112 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
116 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
117 TestFunctional/delete_busybox_image 0.08
118 TestFunctional/delete_my-image_image 0.03
119 TestFunctional/delete_minikube_cached_images 0.03
123 TestJSONOutput/start/Audit 0
125 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
126 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
128 TestJSONOutput/pause/Audit 0
130 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
131 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
133 TestJSONOutput/unpause/Audit 0
135 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
136 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
138 TestJSONOutput/stop/Audit 0
140 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
141 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
142 TestErrorJSONOutput 0.28
144 TestKicCustomNetwork/create_custom_network 56.97
145 TestKicCustomNetwork/use_default_bridge_network 45.18
146 TestKicExistingNetwork 46.18
147 TestMainNoArgs 0.06
150 TestMultiNode/serial/FreshStart2Nodes 139.02
151 TestMultiNode/serial/DeployApp2Nodes 4.92
152 TestMultiNode/serial/PingHostFrom2Pods 1.13
153 TestMultiNode/serial/AddNode 30.27
154 TestMultiNode/serial/ProfileList 0.3
155 TestMultiNode/serial/CopyFile 2.34
156 TestMultiNode/serial/StopNode 2.44
157 TestMultiNode/serial/StartAfterStop 37.08
158 TestMultiNode/serial/RestartKeepsNodes 153.84
159 TestMultiNode/serial/DeleteNode 5.42
160 TestMultiNode/serial/StopMultiNode 40.48
161 TestMultiNode/serial/RestartMultiNode 101.15
162 TestMultiNode/serial/ValidateNameConflict 54.2
168 TestDebPackageInstall/install_arm64_debian:sid/minikube 0
171 TestDebPackageInstall/install_arm64_debian:latest/minikube 0
174 TestDebPackageInstall/install_arm64_debian:10/minikube 0
177 TestDebPackageInstall/install_arm64_debian:9/minikube 0
180 TestDebPackageInstall/install_arm64_ubuntu:latest/minikube 0
183 TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube 0
186 TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube 0
189 TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube 0
193 TestScheduledStopUnix 70.39
196 TestInsufficientStorage 20.64
197 TestRunningBinaryUpgrade 95.6
199 TestKubernetesUpgrade 130.34
205 TestPause/serial/Start 308.54
217 TestNetworkPlugins/group/false 0.85
221 TestPause/serial/SecondStartNoReconfiguration 5.73
224 TestPause/serial/Unpause 0.52
226 TestPause/serial/DeletePaused 2.61
227 TestPause/serial/VerifyDeletedResources 0.2
229 TestStartStop/group/old-k8s-version/serial/FirstStart 123.41
230 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
232 TestStartStop/group/no-preload/serial/FirstStart 113.03
233 TestStartStop/group/old-k8s-version/serial/DeployApp 8.52
234 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.05
235 TestStartStop/group/no-preload/serial/DeployApp 8.76
236 TestStartStop/group/old-k8s-version/serial/Stop 20.42
237 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.9
238 TestStartStop/group/no-preload/serial/Stop 20.3
239 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
240 TestStartStop/group/old-k8s-version/serial/SecondStart 653.03
241 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
242 TestStartStop/group/no-preload/serial/SecondStart 363.17
243 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.05
244 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
245 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
246 TestStartStop/group/no-preload/serial/Pause 2.78
248 TestStartStop/group/embed-certs/serial/FirstStart 100.48
249 TestStartStop/group/embed-certs/serial/DeployApp 8.48
250 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
251 TestStartStop/group/embed-certs/serial/Stop 20.28
252 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
253 TestStartStop/group/embed-certs/serial/SecondStart 364.14
254 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
255 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.31
256 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
257 TestStartStop/group/old-k8s-version/serial/Pause 2.8
259 TestStartStop/group/default-k8s-different-port/serial/FirstStart 108.77
260 TestStartStop/group/default-k8s-different-port/serial/DeployApp 9.73
261 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 1.01
262 TestStartStop/group/default-k8s-different-port/serial/Stop 20.36
263 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.2
264 TestStartStop/group/default-k8s-different-port/serial/SecondStart 360.78
265 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
266 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.32
267 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.39
268 TestStartStop/group/embed-certs/serial/Pause 2.78
270 TestStartStop/group/newest-cni/serial/FirstStart 75.67
271 TestStartStop/group/newest-cni/serial/DeployApp 0
272 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.73
273 TestStartStop/group/newest-cni/serial/Stop 1.41
274 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
275 TestStartStop/group/newest-cni/serial/SecondStart 27.6
276 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
277 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
278 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
279 TestStartStop/group/newest-cni/serial/Pause 2.56
280 TestNetworkPlugins/group/auto/Start 109.39
281 TestNetworkPlugins/group/auto/KubeletFlags 0.27
282 TestNetworkPlugins/group/auto/NetCatPod 11.36
283 TestNetworkPlugins/group/auto/DNS 0.3
284 TestNetworkPlugins/group/auto/Localhost 0.22
285 TestNetworkPlugins/group/auto/HairPin 0.27
286 TestNetworkPlugins/group/custom-weave/Start 90.75
287 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.05
288 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.1
289 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.29
290 TestStartStop/group/default-k8s-different-port/serial/Pause 3.18
292 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.4
293 TestNetworkPlugins/group/custom-weave/NetCatPod 11.46
294 TestNetworkPlugins/group/calico/Start 90.51
295 TestNetworkPlugins/group/calico/ControllerPod 5.03
296 TestNetworkPlugins/group/calico/KubeletFlags 0.31
297 TestNetworkPlugins/group/calico/NetCatPod 12.45
298 TestNetworkPlugins/group/calico/DNS 0.24
299 TestNetworkPlugins/group/calico/Localhost 0.2
300 TestNetworkPlugins/group/calico/HairPin 0.19
301 TestNetworkPlugins/group/enable-default-cni/Start 112.45
302 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
303 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.45
304 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
305 TestNetworkPlugins/group/enable-default-cni/Localhost 0.27
306 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
307 TestNetworkPlugins/group/kindnet/Start 101
308 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
309 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
310 TestNetworkPlugins/group/kindnet/NetCatPod 10.55
311 TestNetworkPlugins/group/kindnet/DNS 0.18
312 TestNetworkPlugins/group/kindnet/Localhost 0.17
313 TestNetworkPlugins/group/kindnet/HairPin 0.19
314 TestNetworkPlugins/group/bridge/Start 102.75
315 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
316 TestNetworkPlugins/group/bridge/NetCatPod 10.35
317 TestNetworkPlugins/group/bridge/DNS 0.19
318 TestNetworkPlugins/group/bridge/Localhost 0.17
319 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.14.0/json-events (11.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210708230110-257783 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210708230110-257783 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.393026996s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (11.39s)
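This subtest drives minikube start with -o=json, which emits one JSON event per stdout line. A minimal consumer sketch, assuming only that each line is self-contained JSON; the profile name and the fields printed at the end are illustrative, not the test's own assertions.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the invocation in the test; the profile name is illustrative.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
		"--download-only", "-p", "download-only-demo", "--force",
		"--kubernetes-version=v1.14.0", "--container-runtime=crio", "--driver=docker")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Decode each stdout line generically rather than assuming an event schema.
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise
		}
		fmt.Println(ev["type"], ev["data"])
	}
	_ = cmd.Wait()
}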

                                                
                                    
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)
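The preload-exists subtest only needs to confirm that the tarball fetched by the previous step is present in the local cache. A hypothetical stand-alone check: the cache layout below mirrors the path printed later in this report, but the home-relative root is an assumption.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumed cache root; the report itself shows a Jenkins-specific .minikube directory.
	cache := filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "preloaded-tarball")
	tarball := filepath.Join(cache, "preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-arm64.tar.lz4")

	info, err := os.Stat(tarball)
	if err != nil {
		fmt.Println("preload missing:", err)
		os.Exit(1)
	}
	fmt.Printf("preload present: %s (%d bytes)\n", tarball, info.Size())
}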

                                                
                                    
TestDownloadOnly/v1.14.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210708230110-257783
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210708230110-257783: exit status 85 (75.697199ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/07/08 23:01:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 23:01:10.241417  257790 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:01:10.241573  257790 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:01:10.241582  257790 out.go:299] Setting ErrFile to fd 2...
	I0708 23:01:10.241586  257790 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:01:10.241708  257790 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	W0708 23:01:10.241818  257790 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/config/config.json: no such file or directory
	I0708 23:01:10.242042  257790 out.go:293] Setting JSON to true
	I0708 23:01:10.242826  257790 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6219,"bootTime":1625779051,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:01:10.242898  257790 start.go:121] virtualization:  
	I0708 23:01:10.246235  257790 notify.go:169] Checking for updates...
	I0708 23:01:10.250143  257790 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:01:10.297586  257790 docker.go:132] docker version: linux-20.10.7
	I0708 23:01:10.297672  257790 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:01:10.395370  257790 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:01:10.339868294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLic
ense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:01:10.395471  257790 docker.go:244] overlay module found
	I0708 23:01:10.400960  257790 start.go:278] selected driver: docker
	I0708 23:01:10.400975  257790 start.go:751] validating driver "docker" against <nil>
	I0708 23:01:10.401113  257790 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:01:10.481580  257790 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:01:10.431071811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLic
ense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:01:10.481683  257790 start_flags.go:261] no existing cluster config was found, will generate one from the flags 
	I0708 23:01:10.481940  257790 start_flags.go:342] Using suggested 2200MB memory alloc based on sys=7846MB, container=7846MB
	I0708 23:01:10.482036  257790 start_flags.go:669] Wait components to verify : map[apiserver:true system_pods:true]
	I0708 23:01:10.482052  257790 cni.go:93] Creating CNI manager for ""
	I0708 23:01:10.482060  257790 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:01:10.482070  257790 start_flags.go:270] Found "CNI" CNI - setting NetworkPlugin=cni
	I0708 23:01:10.482080  257790 start_flags.go:275] config:
	{Name:download-only-20210708230110-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210708230110-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:01:10.484167  257790 cache.go:117] Beginning downloading kic base image for docker with crio
	I0708 23:01:10.485878  257790 preload.go:134] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0708 23:01:10.485978  257790 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0708 23:01:10.521495  257790 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0708 23:01:10.521532  257790 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0708 23:01:10.574049  257790 preload.go:120] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-arm64.tar.lz4
	I0708 23:01:10.574067  257790 cache.go:56] Caching tarball of preloaded images
	I0708 23:01:10.574248  257790 preload.go:134] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0708 23:01:10.576457  257790 preload.go:238] getting checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-arm64.tar.lz4 ...
	I0708 23:01:10.912460  257790 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:cf30aa39f3672cea6231f7dad15418f7 -> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-arm64.tar.lz4
	I0708 23:01:19.145136  257790 preload.go:248] saving checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-arm64.tar.lz4 ...
	I0708 23:01:19.145207  257790 preload.go:255] verifying checksumm of /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210708230110-257783"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.08s)
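The start log above downloads the preload with a ?checksum=md5:... query and then verifies the saved file. minikube's real verification lives in its download package; the sketch below only illustrates the md5 comparison, with the expected digest copied from the URL in the log.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	// Expected digest taken from the ?checksum=md5:... URL in the log above.
	const want = "cf30aa39f3672cea6231f7dad15418f7"
	path := os.Args[1] // path to the downloaded .tar.lz4

	f, err := os.Open(path)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		fmt.Printf("checksum mismatch: got %s, want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}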

                                                
                                    
TestDownloadOnly/v1.21.2/json-events (14.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210708230110-257783 --force --alsologtostderr --kubernetes-version=v1.21.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210708230110-257783 --force --alsologtostderr --kubernetes-version=v1.21.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.469567944s)
--- PASS: TestDownloadOnly/v1.21.2/json-events (14.47s)

                                                
                                    
TestDownloadOnly/v1.21.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.2/preload-exists
--- PASS: TestDownloadOnly/v1.21.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.2/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210708230110-257783
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210708230110-257783: exit status 85 (69.694061ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/07/08 23:01:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 23:01:21.715890  257886 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:01:21.715961  257886 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:01:21.715965  257886 out.go:299] Setting ErrFile to fd 2...
	I0708 23:01:21.715968  257886 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:01:21.716084  257886 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	W0708 23:01:21.716200  257886 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/config/config.json: no such file or directory
	I0708 23:01:21.716297  257886 out.go:293] Setting JSON to true
	I0708 23:01:21.717084  257886 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6231,"bootTime":1625779051,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:01:21.717163  257886 start.go:121] virtualization:  
	I0708 23:01:21.720332  257886 notify.go:169] Checking for updates...
	W0708 23:01:21.724745  257886 start.go:659] api.Load failed for download-only-20210708230110-257783: filestore "download-only-20210708230110-257783": Docker machine "download-only-20210708230110-257783" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0708 23:01:21.724807  257886 driver.go:335] Setting default libvirt URI to qemu:///system
	W0708 23:01:21.724833  257886 start.go:659] api.Load failed for download-only-20210708230110-257783: filestore "download-only-20210708230110-257783": Docker machine "download-only-20210708230110-257783" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0708 23:01:21.770264  257886 docker.go:132] docker version: linux-20.10.7
	I0708 23:01:21.770341  257886 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:01:21.874575  257886 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:01:21.821426248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLic
ense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:01:21.874680  257886 docker.go:244] overlay module found
	I0708 23:01:21.878546  257886 start.go:278] selected driver: docker
	I0708 23:01:21.878558  257886 start.go:751] validating driver "docker" against &{Name:download-only-20210708230110-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210708230110-257783 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:01:21.878718  257886 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:01:21.963752  257886 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:01:21.908089665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLic
ense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:01:21.964084  257886 cni.go:93] Creating CNI manager for ""
	I0708 23:01:21.964097  257886 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:01:21.964106  257886 start_flags.go:275] config:
	{Name:download-only-20210708230110-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:download-only-20210708230110-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:01:21.968288  257886 cache.go:117] Beginning downloading kic base image for docker with crio
	I0708 23:01:21.971581  257886 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:01:21.971673  257886 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0708 23:01:22.005447  257886 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0708 23:01:22.005479  257886 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0708 23:01:22.267120  257886 preload.go:120] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4
	I0708 23:01:22.267140  257886 cache.go:56] Caching tarball of preloaded images
	I0708 23:01:22.267355  257886 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime crio
	I0708 23:01:22.270238  257886 preload.go:238] getting checksum for preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4 ...
	I0708 23:01:22.620330  257886 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:031f4ea1aed1bcb991b5bbb447369481 -> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210708230110-257783"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.22.0-beta.0/json-events (12.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-beta.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210708230110-257783 --force --alsologtostderr --kubernetes-version=v1.22.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210708230110-257783 --force --alsologtostderr --kubernetes-version=v1.22.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.81947687s)
--- PASS: TestDownloadOnly/v1.22.0-beta.0/json-events (12.82s)

                                                
                                    
TestDownloadOnly/v1.22.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-beta.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210708230110-257783
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210708230110-257783: exit status 85 (68.722887ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/07/08 23:01:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0708 23:01:36.256378  257979 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:01:36.256445  257979 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:01:36.256454  257979 out.go:299] Setting ErrFile to fd 2...
	I0708 23:01:36.256457  257979 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:01:36.256582  257979 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	W0708 23:01:36.256692  257979 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/config/config.json: no such file or directory
	I0708 23:01:36.256794  257979 out.go:293] Setting JSON to true
	I0708 23:01:36.257544  257979 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6245,"bootTime":1625779051,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:01:36.257607  257979 start.go:121] virtualization:  
	I0708 23:01:36.261366  257979 notify.go:169] Checking for updates...
	W0708 23:01:36.264081  257979 start.go:659] api.Load failed for download-only-20210708230110-257783: filestore "download-only-20210708230110-257783": Docker machine "download-only-20210708230110-257783" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0708 23:01:36.264141  257979 driver.go:335] Setting default libvirt URI to qemu:///system
	W0708 23:01:36.264166  257979 start.go:659] api.Load failed for download-only-20210708230110-257783: filestore "download-only-20210708230110-257783": Docker machine "download-only-20210708230110-257783" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0708 23:01:36.308711  257979 docker.go:132] docker version: linux-20.10.7
	I0708 23:01:36.308790  257979 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:01:36.415862  257979 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:01:36.361135891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:01:36.415962  257979 docker.go:244] overlay module found
	I0708 23:01:36.418961  257979 start.go:278] selected driver: docker
	I0708 23:01:36.418972  257979 start.go:751] validating driver "docker" against &{Name:download-only-20210708230110-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:download-only-20210708230110-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:01:36.419132  257979 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:01:36.498558  257979 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:01:36.448141481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:01:36.498885  257979 cni.go:93] Creating CNI manager for ""
	I0708 23:01:36.498899  257979 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0708 23:01:36.498906  257979 start_flags.go:275] config:
	{Name:download-only-20210708230110-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-beta.0 ClusterName:download-only-20210708230110-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:01:36.501408  257979 cache.go:117] Beginning downloading kic base image for docker with crio
	I0708 23:01:36.503505  257979 preload.go:134] Checking if preload exists for k8s version v1.22.0-beta.0 and runtime crio
	I0708 23:01:36.503593  257979 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0708 23:01:36.538698  257979 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0708 23:01:36.538724  257979 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0708 23:01:36.595799  257979 preload.go:120] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0708 23:01:36.595814  257979 cache.go:56] Caching tarball of preloaded images
	I0708 23:01:36.595995  257979 preload.go:134] Checking if preload exists for k8s version v1.22.0-beta.0 and runtime crio
	I0708 23:01:36.614105  257979 preload.go:238] getting checksum for preloaded-images-k8s-v11-v1.22.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0708 23:01:36.759883  257979 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:ac85680351232479b05cc257646d681e -> /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-beta.0-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210708230110-257783"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-beta.0/LogsDuration (0.07s)
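
The LogsDuration output above shows the preload tarball being fetched with an md5 digest embedded in the URL query (?checksum=md5:...). A minimal Go sketch of that style of post-download verification; the file path is a placeholder, and the expected digest is the one from the download line above:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifyMD5 hashes a local file and compares it to an expected hex digest.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Placeholder path; the digest comes from the ?checksum=md5:... query above.
	if err := verifyMD5("preloaded-images-k8s-v11-v1.22.0-beta.0-cri-o-overlay-arm64.tar.lz4", "ac85680351232479b05cc257646d681e"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("preload checksum OK")
}
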
TestDownloadOnly/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.35s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-20210708230110-257783
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

TestAddons/parallel/MetricsServer (5.62s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: metrics-server stabilized in 1.71922ms
addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:340: "metrics-server-77c99ccb96-g7fdg" [d976d39d-49f5-4cfd-8756-cdcf9a8caa2a] Running
addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00544979s
addons_test.go:382: (dbg) Run:  kubectl --context addons-20210708230204-257783 top pods -n kube-system
addons_test.go:387: kubectl --context addons-20210708230204-257783 top pods -n kube-system: unexpected stderr: W0708 23:09:06.482316  271343 top_pod.go:140] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
addons_test.go:399: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.62s)
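
The only stderr noise in MetricsServer above is kubectl's deprecation warning for `kubectl top`, which recommends switching to protocol buffers early. A minimal Go sketch of the same query with that flag; the context name is a placeholder:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Placeholder context; mirrors the test's kubectl invocation but opts in
	// to --use-protocol-buffers, as the warning above suggests.
	cmd := exec.Command("kubectl", "--context", "my-cluster",
		"top", "pods", "-n", "kube-system", "--use-protocol-buffers")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	out, err := cmd.Output()
	if err != nil {
		log.Fatalf("kubectl top failed: %v\n%s", err, stderr.String())
	}
	fmt.Print(string(out))
}
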
TestAddons/parallel/CSI (43.79s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 6.344945ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-20210708230204-257783 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:390: (dbg) Run:  kubectl --context addons-20210708230204-257783 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-20210708230204-257783 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:340: "task-pv-pod" [cc987a7f-7c18-400d-9331-a47cfbc2593e] Pending
helpers_test.go:340: "task-pv-pod" [cc987a7f-7c18-400d-9331-a47cfbc2593e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:340: "task-pv-pod" [cc987a7f-7c18-400d-9331-a47cfbc2593e] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.007827545s
addons_test.go:562: (dbg) Run:  kubectl --context addons-20210708230204-257783 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:415: (dbg) Run:  kubectl --context addons-20210708230204-257783 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-20210708230204-257783 delete pod task-pv-pod
addons_test.go:572: (dbg) Done: kubectl --context addons-20210708230204-257783 delete pod task-pv-pod: (1.779248456s)
addons_test.go:578: (dbg) Run:  kubectl --context addons-20210708230204-257783 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-20210708230204-257783 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:390: (dbg) Run:  kubectl --context addons-20210708230204-257783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-20210708230204-257783 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:340: "task-pv-pod-restore" [c13a3091-6ac2-4a57-b59c-f279f6cec58f] Pending
helpers_test.go:340: "task-pv-pod-restore" [c13a3091-6ac2-4a57-b59c-f279f6cec58f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:340: "task-pv-pod-restore" [c13a3091-6ac2-4a57-b59c-f279f6cec58f] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 15.007741956s
addons_test.go:604: (dbg) Run:  kubectl --context addons-20210708230204-257783 delete pod task-pv-pod-restore
addons_test.go:604: (dbg) Done: kubectl --context addons-20210708230204-257783 delete pod task-pv-pod-restore: (2.41428875s)
addons_test.go:608: (dbg) Run:  kubectl --context addons-20210708230204-257783 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-20210708230204-257783 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-linux-arm64 -p addons-20210708230204-257783 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.169737237s)
addons_test.go:620: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.79s)
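
The CSI steps above exercise a full claim → pod → snapshot → restore cycle against the csi-hostpath driver. A condensed Go sketch of the same sequence, shelling out the way the test helpers do; it deliberately omits the readiness polling the test performs between steps:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes one step and aborts on failure, like the test's (dbg) Run lines.
func run(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	fmt.Println("ok:", args)
}

func main() {
	ctx := "addons-20210708230204-257783" // context name from the log above
	// Claim storage, run a pod against it, snapshot the volume,
	// then restore the snapshot into a fresh claim and pod.
	run("kubectl", "--context", ctx, "create", "-f", "testdata/csi-hostpath-driver/pvc.yaml")
	run("kubectl", "--context", ctx, "create", "-f", "testdata/csi-hostpath-driver/pv-pod.yaml")
	run("kubectl", "--context", ctx, "create", "-f", "testdata/csi-hostpath-driver/snapshot.yaml")
	run("kubectl", "--context", ctx, "delete", "pod", "task-pv-pod")
	run("kubectl", "--context", ctx, "delete", "pvc", "hpvc")
	run("kubectl", "--context", ctx, "create", "-f", "testdata/csi-hostpath-driver/pvc-restore.yaml")
	run("kubectl", "--context", ctx, "create", "-f", "testdata/csi-hostpath-driver/pv-pod-restore.yaml")
}
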
TestAddons/parallel/GCPAuth (15.64s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:631: (dbg) Run:  kubectl --context addons-20210708230204-257783 create -f testdata/busybox.yaml
addons_test.go:637: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [77d47199-b092-4a25-9d5e-ecbd20e1690b] Pending
helpers_test.go:340: "busybox" [77d47199-b092-4a25-9d5e-ecbd20e1690b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [77d47199-b092-4a25-9d5e-ecbd20e1690b] Running
addons_test.go:637: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 9.009805126s
addons_test.go:643: (dbg) Run:  kubectl --context addons-20210708230204-257783 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:656: (dbg) Run:  kubectl --context addons-20210708230204-257783 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:680: (dbg) Run:  kubectl --context addons-20210708230204-257783 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:709: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210708230204-257783 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:709: (dbg) Done: out/minikube-linux-arm64 -p addons-20210708230204-257783 addons disable gcp-auth --alsologtostderr -v=1: (5.739295355s)
--- PASS: TestAddons/parallel/GCPAuth (15.64s)
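
The gcp-auth checks above verify, from inside the pod, that the webhook injected the credential settings. A minimal Go sketch of the first of those checks, with the context name taken from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	ctx := "addons-20210708230204-257783" // context name from the log above
	// Read the credential path the gcp-auth webhook injects into the pod.
	out, err := exec.Command("kubectl", "--context", ctx, "exec", "busybox", "--",
		"/bin/sh", "-c", "printenv GOOGLE_APPLICATION_CREDENTIALS").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("credentials mounted at:", strings.TrimSpace(string(out)))
}
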
TestCertOptions (48.43s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-20210708234133-257783 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0708 23:41:36.692439  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
cert_options_test.go:47: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-20210708234133-257783 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (45.367094301s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-20210708234133-257783 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210708234133-257783 config view
helpers_test.go:176: Cleaning up "cert-options-20210708234133-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-20210708234133-257783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-20210708234133-257783: (2.736130201s)
--- PASS: TestCertOptions (48.43s)
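
The certificate assertions in TestCertOptions come down to dumping the apiserver certificate on the node and checking it for the requested names and IPs. A minimal Go sketch of that check; the expected SAN is one of the --apiserver-ips values from the start command above:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "cert-options-20210708234133-257783" // profile name from the log above
	// Dump the apiserver certificate from inside the node.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !strings.Contains(string(out), "192.168.15.15") {
		log.Fatal("expected SAN 192.168.15.15 missing from apiserver.crt")
	}
	fmt.Println("apiserver certificate contains the requested SAN")
}
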
TestForceSystemdFlag (48.67s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-20210708234044-257783 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-20210708234044-257783 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.718351359s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20210708234044-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-20210708234044-257783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-20210708234044-257783: (2.950957106s)
--- PASS: TestForceSystemdFlag (48.67s)

TestForceSystemdEnv (64.84s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-20210708233940-257783 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0708 23:40:04.592627  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
docker_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-20210708233940-257783 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m2.026754753s)
helpers_test.go:176: Cleaning up "force-systemd-env-20210708233940-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-20210708233940-257783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-20210708233940-257783: (2.811357406s)
--- PASS: TestForceSystemdEnv (64.84s)

TestErrorSpam/setup (47.15s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-20210708231743-257783 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210708231743-257783 --driver=docker  --container-runtime=crio
error_spam_test.go:78: (dbg) Done: out/minikube-linux-arm64 start -p nospam-20210708231743-257783 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210708231743-257783 --driver=docker  --container-runtime=crio: (47.145373095s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (47.15s)

TestErrorSpam/start (0.88s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 start --dry-run
--- PASS: TestErrorSpam/start (0.88s)

TestErrorSpam/status (0.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (5.54s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 pause: (4.659128533s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 pause
--- PASS: TestErrorSpam/pause (5.54s)

TestErrorSpam/unpause (1.48s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

TestErrorSpam/stop (9.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 stop: (9.134707521s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210708231743-257783 --log_dir /tmp/nospam-20210708231743-257783 stop
--- PASS: TestErrorSpam/stop (9.39s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1556: local sync path: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/files/etc/test/nested/copy/257783/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (100.09s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1881: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210708231854-257783 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0708 23:20:04.592884  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:04.598490  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:04.608793  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:04.628986  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:04.669158  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:04.749419  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:04.909716  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:05.230200  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:05.870691  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:07.150863  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:09.711047  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:14.831213  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:20:25.072325  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
functional_test.go:1881: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210708231854-257783 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m40.089739759s)
--- PASS: TestFunctional/serial/StartWithProxy (100.09s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (4.89s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:589: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210708231854-257783 --alsologtostderr -v=8
functional_test.go:589: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210708231854-257783 --alsologtostderr -v=8: (4.890858247s)
functional_test.go:593: soft start took 4.891642491s for "functional-20210708231854-257783" cluster.
--- PASS: TestFunctional/serial/SoftStart (4.89s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:609: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.28s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:622: (dbg) Run:  kubectl --context functional-20210708231854-257783 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.28s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:944: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 cache add k8s.gcr.io/pause:3.1
functional_test.go:944: (dbg) Done: out/minikube-linux-arm64 -p functional-20210708231854-257783 cache add k8s.gcr.io/pause:3.1: (2.09972437s)
functional_test.go:944: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 cache add k8s.gcr.io/pause:3.3
functional_test.go:944: (dbg) Done: out/minikube-linux-arm64 -p functional-20210708231854-257783 cache add k8s.gcr.io/pause:3.3: (1.976812737s)
functional_test.go:944: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 cache add k8s.gcr.io/pause:latest
functional_test.go:944: (dbg) Done: out/minikube-linux-arm64 -p functional-20210708231854-257783 cache add k8s.gcr.io/pause:latest: (1.741431739s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.82s)

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:974: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210708231854-257783 /tmp/functional-20210708231854-257783893914649
E0708 23:20:45.552774  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
functional_test.go:986: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 cache add minikube-local-cache-test:functional-20210708231854-257783
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 cache delete minikube-local-cache-test:functional-20210708231854-257783
functional_test.go:980: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210708231854-257783
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:998: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1005: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1018: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1046: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1046: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (283.209748ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1051: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 cache reload
functional_test.go:1051: (dbg) Done: out/minikube-linux-arm64 -p functional-20210708231854-257783 cache reload: (1.332859152s)
functional_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.20s)
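
cache_reload above is a verify/reload/verify cycle: delete the image inside the node, confirm crictl can no longer inspect it, then let `cache reload` push it back from the host-side cache. A minimal Go sketch of the same cycle, with the binary path and profile name taken from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// sshRun executes a command inside the minikube node over ssh.
func sshRun(profile, cmd string) error {
	return exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh", cmd).Run()
}

func main() {
	profile := "functional-20210708231854-257783" // profile name from the log above
	img := "k8s.gcr.io/pause:latest"
	if err := sshRun(profile, "sudo crictl rmi "+img); err != nil {
		log.Fatal(err)
	}
	// inspecti must now fail: the image is gone from the node.
	if err := sshRun(profile, "sudo crictl inspecti "+img); err == nil {
		log.Fatal("image unexpectedly still present after rmi")
	}
	if err := exec.Command("out/minikube-linux-arm64", "-p", profile, "cache", "reload").Run(); err != nil {
		log.Fatal(err)
	}
	// inspecti must succeed again: cache reload restored the image.
	if err := sshRun(profile, "sudo crictl inspecti "+img); err != nil {
		log.Fatal("image missing after cache reload")
	}
	fmt.Println("cache reload restored", img)
}
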
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1065: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1065: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.39s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 kubectl -- --context functional-20210708231854-257783 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.39s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:663: (dbg) Run:  out/kubectl --context functional-20210708231854-257783 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (39.02s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:677: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210708231854-257783 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0708 23:21:26.513314  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
functional_test.go:677: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210708231854-257783 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.019268349s)
functional_test.go:681: restart took 39.019364503s for "functional-20210708231854-257783" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.02s)
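
ExtraConfig above restarts the existing cluster with a component flag passed through --extra-config, which takes component.key=value pairs like the admission-plugins setting shown. A minimal Go sketch of the same restart, with the binary path and profile name taken from the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	profile := "functional-20210708231854-257783" // profile name from the log above
	// Restart the running cluster; minikube merges the extra flag into the
	// generated kube-apiserver configuration.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", profile,
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
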
TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:728: (dbg) Run:  kubectl --context functional-20210708231854-257783 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:742: etcd phase: Running
functional_test.go:752: etcd status: Ready
functional_test.go:742: kube-apiserver phase: Running
functional_test.go:752: kube-apiserver status: Ready
functional_test.go:742: kube-controller-manager phase: Running
functional_test.go:752: kube-controller-manager status: Ready
functional_test.go:742: kube-scheduler phase: Running
functional_test.go:752: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
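
ComponentHealth above lists the control-plane pods as JSON and checks each pod's phase and Ready condition. A minimal Go sketch of that check, decoding only the fields it needs; the context name is taken from the log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList models just the fields the health check reads.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	ctx := "functional-20210708231854-257783" // context name from the log above
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po",
		"-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}
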
TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1091: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1091: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 config get cpus
functional_test.go:1091: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210708231854-257783 config get cpus: exit status 14 (74.620789ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1091: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 config set cpus 2
functional_test.go:1091: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 config get cpus
functional_test.go:1091: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 config unset cpus
functional_test.go:1091: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1091: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210708231854-257783 config get cpus: exit status 14 (66.829691ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
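
ConfigCmd above depends on `config get` exiting with status 14 when the key is unset, as the two Non-zero exit lines show. A minimal Go sketch that tells that exit code apart from other failures, with the binary path and profile name taken from the log:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "functional-20210708231854-257783" // profile name from the log above
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"config", "get", "cpus").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 14 in the log above means the key is not set.
		fmt.Printf("cpus is not set (exit status %d)\n", exitErr.ExitCode())
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cpus = %s", out)
}
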
TestFunctional/parallel/DashboardCmd (2.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:819: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url -p functional-20210708231854-257783 --alsologtostderr -v=1]
2021/07/08 23:22:11 [DEBUG] GET http://127.0.0.1:41113/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:824: (dbg) stopping [out/minikube-linux-arm64 dashboard --url -p functional-20210708231854-257783 --alsologtostderr -v=1] ...
helpers_test.go:504: unable to kill pid 291098: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (2.67s)

TestFunctional/parallel/DryRun (0.5s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:881: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210708231854-257783 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:881: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-20210708231854-257783 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (205.354515ms)

-- stdout --
	* [functional-20210708231854-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=11942
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities

-- /stdout --
** stderr ** 
	I0708 23:22:08.236690  290819 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:22:08.236757  290819 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:22:08.236766  290819 out.go:299] Setting ErrFile to fd 2...
	I0708 23:22:08.236769  290819 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:22:08.236891  290819 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:22:08.237094  290819 out.go:293] Setting JSON to false
	I0708 23:22:08.237876  290819 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7477,"bootTime":1625779051,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:22:08.237947  290819 start.go:121] virtualization:  
	I0708 23:22:08.240645  290819 out.go:165] * [functional-20210708231854-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0708 23:22:08.242838  290819 out.go:165]   - MINIKUBE_LOCATION=11942
	I0708 23:22:08.244722  290819 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:22:08.246453  290819 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0708 23:22:08.248214  290819 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0708 23:22:08.249035  290819 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:22:08.297499  290819 docker.go:132] docker version: linux-20.10.7
	I0708 23:22:08.297576  290819 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:22:08.379790  290819 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:39 SystemTime:2021-07-08 23:22:08.329502248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:22:08.379880  290819 docker.go:244] overlay module found
	I0708 23:22:08.381970  290819 out.go:165] * Using the docker driver based on existing profile
	I0708 23:22:08.381987  290819 start.go:278] selected driver: docker
	I0708 23:22:08.381992  290819 start.go:751] validating driver "docker" against &{Name:functional-20210708231854-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:functional-20210708231854-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:22:08.382098  290819 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0708 23:22:08.382136  290819 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0708 23:22:08.382152  290819 out.go:230] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0708 23:22:08.383814  290819 out.go:165]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0708 23:22:08.385937  290819 out.go:165] 
	W0708 23:22:08.385997  290819 out.go:230] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0708 23:22:08.387664  290819 out.go:165] 
** /stderr **
functional_test.go:896: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210708231854-257783 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.50s)
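
Note: the stderr above shows minikube's pre-flight memory validation. A minimal sketch of reproducing it by hand, using the binary and profile from this run; the 250MB request matches the RSRC_INSUFFICIENT_REQ_MEMORY exit logged here, and the full flag set is visible in the InternationalLanguage test below:

    # Request less memory than the 1800MB usable minimum; --dry-run exits
    # during validation, before any container or VM is created.
    out/minikube-linux-arm64 start -p functional-20210708231854-257783 \
      --dry-run --memory 250MB --driver=docker --container-runtime=crio
    echo $?   # 23, per the "exit status 23" recorded in this report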

TestFunctional/parallel/InternationalLanguage (0.22s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:918: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210708231854-257783 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:918: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-20210708231854-257783 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (217.511602ms)
-- stdout --
	* [functional-20210708231854-257783] minikube v1.22.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=11942
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
-- /stdout --
** stderr ** 
	I0708 23:22:08.747672  290945 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:22:08.747774  290945 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:22:08.747787  290945 out.go:299] Setting ErrFile to fd 2...
	I0708 23:22:08.747791  290945 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:22:08.747960  290945 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:22:08.748169  290945 out.go:293] Setting JSON to false
	I0708 23:22:08.748975  290945 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7478,"bootTime":1625779051,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:22:08.749044  290945 start.go:121] virtualization:  
	I0708 23:22:08.751424  290945 out.go:165] * [functional-20210708231854-257783] minikube v1.22.0 sur Ubuntu 20.04 (arm64)
	I0708 23:22:08.753867  290945 out.go:165]   - MINIKUBE_LOCATION=11942
	I0708 23:22:08.755681  290945 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:22:08.757984  290945 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0708 23:22:08.759624  290945 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0708 23:22:08.760440  290945 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:22:08.808535  290945 docker.go:132] docker version: linux-20.10.7
	I0708 23:22:08.808612  290945 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:22:08.891935  290945 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:39 SystemTime:2021-07-08 23:22:08.839942454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:22:08.892035  290945 docker.go:244] overlay module found
	I0708 23:22:08.896670  290945 out.go:165] * Utilisation du pilote docker basé sur le profil existant
	I0708 23:22:08.896689  290945 start.go:278] selected driver: docker
	I0708 23:22:08.896694  290945 start.go:751] validating driver "docker" against &{Name:functional-20210708231854-257783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:functional-20210708231854-257783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0708 23:22:08.896819  290945 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0708 23:22:08.896854  290945 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0708 23:22:08.896870  290945 out.go:230] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0708 23:22:08.899409  290945 out.go:165]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0708 23:22:08.901908  290945 out.go:165] 
	W0708 23:22:08.902055  290945 out.go:230] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0708 23:22:08.904147  290945 out.go:165] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
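
Note: the French messages above are the localized equivalents of the English warnings in the DryRun output ("Your cgroup does not allow setting memory" / "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY ..."). Localization is driven by the process locale; a sketch, assuming a French locale is installed on the host (the LC_ALL value is an assumption, not taken from this log):

    # Assumption: minikube selects its output language from the environment locale.
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-20210708231854-257783 \
      --dry-run --memory 250MB --driver=docker --container-runtime=crio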

TestFunctional/parallel/StatusCmd (0.92s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:771: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 status
functional_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:788: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)
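
Note: the second invocation above exercises status formatting with a Go template; each field of the status struct can be extracted individually. A sketch using the fields from the test (the "kublet" label in the test's format string is arbitrary text, only the {{.Kubelet}} reference matters):

    # Render selected status fields with a custom Go template.
    out/minikube-linux-arm64 -p functional-20210708231854-257783 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # Or emit the whole status as JSON for scripting.
    out/minikube-linux-arm64 -p functional-20210708231854-257783 status -o json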

TestFunctional/parallel/LogsCmd (1.06s)
=== RUN   TestFunctional/parallel/LogsCmd
=== PAUSE TestFunctional/parallel/LogsCmd
=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 logs
=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:1127: (dbg) Done: out/minikube-linux-arm64 -p functional-20210708231854-257783 logs: (1.063376883s)
--- PASS: TestFunctional/parallel/LogsCmd (1.06s)

TestFunctional/parallel/LogsFileCmd (1.39s)
=== RUN   TestFunctional/parallel/LogsFileCmd
=== PAUSE TestFunctional/parallel/LogsFileCmd
=== CONT  TestFunctional/parallel/LogsFileCmd
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 logs --file /tmp/functional-20210708231854-257783717259187/logs.txt
functional_test.go:1143: (dbg) Done: out/minikube-linux-arm64 -p functional-20210708231854-257783 logs --file /tmp/functional-20210708231854-257783717259187/logs.txt: (1.389664687s)
--- PASS: TestFunctional/parallel/LogsFileCmd (1.39s)
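
Note: the two logs tests differ only in the sink: LogsCmd writes to stdout, LogsFileCmd writes to a file via --file. A sketch (the /tmp path here is illustrative; the test used a generated temp directory):

    # Capture cluster logs to a file instead of stdout.
    out/minikube-linux-arm64 -p functional-20210708231854-257783 logs --file /tmp/logs.txt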

TestFunctional/parallel/MountCmd (5.78s)
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-20210708231854-257783 /tmp/mounttest328506532:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1625786519951341792" to /tmp/mounttest328506532/created-by-test
functional_test_mount_test.go:107: wrote "test-1625786519951341792" to /tmp/mounttest328506532/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1625786519951341792" to /tmp/mounttest328506532/test-1625786519951341792
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (323.355059ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  8 23:21 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  8 23:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  8 23:21 test-1625786519951341792
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh cat /mount-9p/test-1625786519951341792
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-20210708231854-257783 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:340: "busybox-mount" [6a0c9de9-f72f-4632-8af9-cbf0b8d670b9] Pending
=== CONT  TestFunctional/parallel/MountCmd
helpers_test.go:340: "busybox-mount" [6a0c9de9-f72f-4632-8af9-cbf0b8d670b9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:340: "busybox-mount" [6a0c9de9-f72f-4632-8af9-cbf0b8d670b9] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd: integration-test=busybox-mount healthy within 3.005936212s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-20210708231854-257783 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-20210708231854-257783 /tmp/mounttest328506532:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd (5.78s)
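
Note: the first findmnt above exited non-zero because the 9p mount daemon had not finished publishing the mount; the test simply retried and the second probe succeeded. The overall flow, as a sketch (the host directory is illustrative; the test used a generated temp directory):

    # Mount a host directory into the node over 9p, in the background.
    out/minikube-linux-arm64 mount -p functional-20210708231854-257783 /tmp/mounttest:/mount-9p &
    # Verify the mount is visible inside the node (may need a retry while the daemon starts).
    out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "findmnt -T /mount-9p | grep 9p"
    # Clean up: unmount inside the node, then stop the background mount process.
    out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo umount -f /mount-9p"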

TestFunctional/parallel/ServiceCmd (13.53s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1317: (dbg) Run:  kubectl --context functional-20210708231854-257783 create deployment hello-node --image=k8s.gcr.io/echoserver-arm:1.8
functional_test.go:1325: (dbg) Run:  kubectl --context functional-20210708231854-257783 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1330: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:340: "hello-node-6d98884d59-tl5ms" [94dc9fbe-e8fc-4fdc-9469-295c4b4ac39f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:340: "hello-node-6d98884d59-tl5ms" [94dc9fbe-e8fc-4fdc-9469-295c4b4ac39f] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1330: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.007852354s
functional_test.go:1334: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 service list
functional_test.go:1347: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 service --namespace=default --https --url hello-node
functional_test.go:1356: found endpoint: https://192.168.49.2:32018
functional_test.go:1367: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 service hello-node --url --format={{.IP}}
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 service hello-node --url
functional_test.go:1382: found endpoint for hello-node: http://192.168.49.2:32018
functional_test.go:1393: Attempting to fetch http://192.168.49.2:32018 ...
functional_test.go:1412: http://192.168.49.2:32018: success! body:
Hostname: hello-node-6d98884d59-tl5ms
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32018
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmd (13.53s)
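
Note: the NodePort URL (http://192.168.49.2:32018) is discovered rather than hard-coded; "service --url" resolves the node IP and the allocated port. A sketch of the flow exercised above (curl stands in for the test's HTTP fetch and is an assumption):

    # Deploy and expose an echo server, then resolve and fetch its NodePort URL.
    kubectl --context functional-20210708231854-257783 create deployment hello-node \
      --image=k8s.gcr.io/echoserver-arm:1.8
    kubectl --context functional-20210708231854-257783 expose deployment hello-node \
      --type=NodePort --port=8080
    curl "$(out/minikube-linux-arm64 -p functional-20210708231854-257783 service hello-node --url)"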

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1427: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 addons list
functional_test.go:1438: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (28.53s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:340: "storage-provisioner" [19cfa496-0596-43ec-8758-c7ad6294e16f] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00652108s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210708231854-257783 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210708231854-257783 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210708231854-257783 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210708231854-257783 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:340: "sp-pod" [0a777b93-310a-4bc5-9eaf-14e51e68383b] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:340: "sp-pod" [0a777b93-310a-4bc5-9eaf-14e51e68383b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:340: "sp-pod" [0a777b93-310a-4bc5-9eaf-14e51e68383b] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00634436s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210708231854-257783 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210708231854-257783 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210708231854-257783 delete -f testdata/storage-provisioner/pod.yaml: (3.173186663s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210708231854-257783 apply -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:340: "sp-pod" [138941c6-27ae-4c1d-9d8c-e10e5a4376ec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:340: "sp-pod" [138941c6-27ae-4c1d-9d8c-e10e5a4376ec] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.008619535s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210708231854-257783 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.53s)
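
Note: this test proves persistence, not just provisioning: a file is written into the claim-backed volume, the pod is deleted, and a fresh pod over the same PVC must still see it. The sequence above, condensed:

    # Create the claim and a pod that mounts it at /tmp/mount.
    kubectl --context functional-20210708231854-257783 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-20210708231854-257783 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20210708231854-257783 exec sp-pod -- touch /tmp/mount/foo
    # Recreate the pod; the volume, and the file, must outlive it.
    kubectl --context functional-20210708231854-257783 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20210708231854-257783 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-20210708231854-257783 exec sp-pod -- ls /tmp/mount   # expect: foo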

TestFunctional/parallel/SSHCmd (0.53s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "echo hello"
functional_test.go:1477: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (0.52s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.52s)
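
Note: "minikube cp" copies from the host into the node's filesystem; the test verifies the copy by reading the file back over ssh. As a sketch:

    # Copy a host file into the node, then read it back to confirm the contents.
    out/minikube-linux-arm64 -p functional-20210708231854-257783 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo cat /home/docker/cp-test.txt"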

TestFunctional/parallel/FileSync (0.39s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1604: Checking for existence of /etc/test/nested/copy/257783/hosts within VM
functional_test.go:1605: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo cat /etc/test/nested/copy/257783/hosts"
functional_test.go:1610: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (0.89s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1645: Checking for existence of /etc/ssl/certs/257783.pem within VM
functional_test.go:1646: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo cat /etc/ssl/certs/257783.pem"
functional_test.go:1645: Checking for existence of /usr/share/ca-certificates/257783.pem within VM
functional_test.go:1646: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo cat /usr/share/ca-certificates/257783.pem"
functional_test.go:1645: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1646: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo cat /etc/ssl/certs/51391683.0"
--- PASS: TestFunctional/parallel/CertSync (0.89s)
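
Note: CertSync checks that a user-supplied CA certificate is synced to both canonical locations inside the node plus its hashed alias (51391683.0 appears to be the OpenSSL subject-hash filename for this certificate). A condensed check over the three paths from the log:

    # The same PEM should be readable at all three paths inside the node.
    for f in /etc/ssl/certs/257783.pem /usr/share/ca-certificates/257783.pem /etc/ssl/certs/51391683.0; do
      out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo cat $f" > /dev/null && echo "ok: $f"
    done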

TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-20210708231854-257783 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/LoadImage (2.12s)
=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:238: (dbg) Run:  docker pull busybox:1.33
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:245: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210708231854-257783
functional_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 image load docker.io/library/busybox:load-functional-20210708231854-257783
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:321: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210708231854-257783 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210708231854-257783
--- PASS: TestFunctional/parallel/LoadImage (2.12s)
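
Note: LoadImage round-trips an image from the host's docker daemon into the node's CRI-O image store, then confirms it with crictl ("inspecti" is crictl's image-inspect subcommand). As a sketch:

    # Tag a host image, push it into the node's container runtime, and verify.
    docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210708231854-257783
    out/minikube-linux-arm64 -p functional-20210708231854-257783 image load \
      docker.io/library/busybox:load-functional-20210708231854-257783
    out/minikube-linux-arm64 ssh -p functional-20210708231854-257783 -- \
      sudo crictl inspecti docker.io/library/busybox:load-functional-20210708231854-257783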

TestFunctional/parallel/RemoveImage (2.62s)
=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:279: (dbg) Run:  docker pull busybox:1.32
functional_test.go:286: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210708231854-257783
functional_test.go:292: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 image load docker.io/library/busybox:remove-functional-20210708231854-257783
functional_test.go:292: (dbg) Done: out/minikube-linux-arm64 -p functional-20210708231854-257783 image load docker.io/library/busybox:remove-functional-20210708231854-257783: (1.008192298s)
functional_test.go:298: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 image rm docker.io/library/busybox:remove-functional-20210708231854-257783
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:335: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210708231854-257783 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (2.62s)

TestFunctional/parallel/BuildImage (2.64s)
=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 image build -t localhost/my-image:functional-20210708231854-257783 testdata/build
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:359: (dbg) Done: out/minikube-linux-arm64 -p functional-20210708231854-257783 image build -t localhost/my-image:functional-20210708231854-257783 testdata/build: (2.262972038s)
functional_test.go:364: (dbg) Stdout: out/minikube-linux-arm64 -p functional-20210708231854-257783 image build -t localhost/my-image:functional-20210708231854-257783 testdata/build:
STEP 1: FROM busybox
STEP 2: RUN true
--> 37ff10210e2
STEP 3: ADD content.txt /
STEP 4: COMMIT localhost/my-image:functional-20210708231854-257783
--> 31e219e1794
Successfully tagged localhost/my-image:functional-20210708231854-257783
31e219e1794c9fbb7cb02768d9dee684e379ac22253683b83d32e221f95dc6f7
functional_test.go:367: (dbg) Stderr: out/minikube-linux-arm64 -p functional-20210708231854-257783 image build -t localhost/my-image:functional-20210708231854-257783 testdata/build:
Resolved "busybox" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/busybox:latest...
Getting image source signatures
Copying blob sha256:38cc3b49dbab817c9404b9a301d1f673d4b0c2e3497dbcfbea2be77516679682
Copying config sha256:90441bfaac70995ed0539fcde9e822a6293a6aac2701899520ac5d249c074414
Writing manifest to image destination
Storing signatures
functional_test.go:321: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210708231854-257783 -- sudo crictl inspecti localhost/my-image:functional-20210708231854-257783
--- PASS: TestFunctional/parallel/BuildImage (2.64s)
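
Note: "image build" runs inside the node against the CRI-O image store, which is why crictl inspecti can see localhost/my-image afterwards with no docker daemon involved. The build log shows exactly three steps, so the Dockerfile under testdata/build can be inferred from it; a sketch:

    # testdata/build/Dockerfile, as implied by STEP 1-3 in the build output above:
    #   FROM busybox
    #   RUN true
    #   ADD content.txt /
    out/minikube-linux-arm64 -p functional-20210708231854-257783 image build \
      -t localhost/my-image:functional-20210708231854-257783 testdata/build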

TestFunctional/parallel/ListImages (0.46s)
=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:403: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 image ls
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:408: (dbg) Stdout: out/minikube-linux-arm64 -p functional-20210708231854-257783 image ls:
localhost/minikube-local-cache-test:functional-20210708231854-257783
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.46s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1673: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo systemctl is-active docker"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1673: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo systemctl is-active docker": exit status 1 (349.034052ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1673: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo systemctl is-active containerd"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1673: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo systemctl is-active containerd": exit status 1 (382.068623ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
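
Note: with --container-runtime=crio, the docker and containerd services inside the node must stay disabled. "systemctl is-active" prints the unit state and exits 0 only when the unit is active, which is why both probes above printed "inactive" and came back through ssh with exit status 3. A condensed check:

    # Both should print "inactive" and exit non-zero on a CRI-O node.
    out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo systemctl is-active docker"
    out/minikube-linux-arm64 -p functional-20210708231854-257783 ssh "sudo systemctl is-active containerd"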

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:1902: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (11.22s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:1915: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:1915: (dbg) Done: out/minikube-linux-arm64 -p functional-20210708231854-257783 version -o=json --components: (11.222570912s)
--- PASS: TestFunctional/parallel/Version/components (11.22s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1764: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1764: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1764: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210708231854-257783 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1202: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1207: Took "301.688073ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1221: Took "55.6318ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1252: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1257: Took "297.359623ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1270: Took "55.33938ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-arm64 -p functional-20210708231854-257783 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210708231854-257783 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.109.212.255 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-arm64 -p functional-20210708231854-257783 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
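
Note: the four tunnel subtests form one serial flow: start "minikube tunnel" as a background daemon, wait until the LoadBalancer service gets an ingress IP, fetch through that IP directly (http://10.109.212.255 above), then tear the tunnel down. A sketch (nginx-svc is presumably created by an earlier setup step of this run, outside this excerpt):

    # Run the tunnel in the background so LoadBalancer services get a reachable IP.
    out/minikube-linux-arm64 -p functional-20210708231854-257783 tunnel &
    TUNNEL_PID=$!
    # Read the assigned ingress IP once it appears, then hit it directly.
    kubectl --context functional-20210708231854-257783 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    kill "$TUNNEL_PID"   # the DeleteTunnel step: stop the tunnel process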

TestFunctional/delete_busybox_image (0.08s)
=== RUN   TestFunctional/delete_busybox_image
functional_test.go:182: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210708231854-257783
functional_test.go:187: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210708231854-257783
--- PASS: TestFunctional/delete_busybox_image (0.08s)

TestFunctional/delete_my-image_image (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210708231854-257783
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210708231854-257783
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)
=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-20210708232416-257783 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-20210708232416-257783 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.284929ms)

-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210708232416-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"56e1d34a-90d7-495a-a4d4-db000d36078e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=11942"},"datacontenttype":"application/json","id":"03f4a1c0-605e-404a-a762-84ee53274e1f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig"},"datacontenttype":"application/json","id":"cc6e5d6c-f464-4c21-b2b4-78b32c6e5d68","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube"},"datacontenttype":"application/json","id":"86848624-250b-411b-b383-a013d477a340","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"},"datacontenttype":"application/json","id":"22bc7526-5109-4004-b4f5-74c2275493af","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"8cd2602b-40a6-4bae-a696-029c2372727b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210708232416-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-20210708232416-257783
--- PASS: TestErrorJSONOutput (0.28s)
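
Each line that start --output=json prints above is a self-contained CloudEvents-style JSON object, so the failure can be extracted mechanically rather than by eye. A minimal sketch, assuming jq is installed (the profile name is illustrative):

	out/minikube-linux-arm64 start -p json-demo --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name): \(.message) (exit code \(.exitcode))"'
	# Per the event stream above, this prints:
	#   DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64 (exit code 56)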

TestKicCustomNetwork/create_custom_network (56.97s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-20210708232417-257783 --network=
E0708 23:25:04.599783  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-20210708232417-257783 --network=: (54.514661004s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210708232417-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-20210708232417-257783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-20210708232417-257783: (2.417370646s)
--- PASS: TestKicCustomNetwork/create_custom_network (56.97s)
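
Note that --network= is passed with an empty value here; in that case minikube appears to fall back to creating a Docker network named after the profile, which the follow-up "docker network ls" confirms. A sketch with an explicit, illustrative network name:

	# Start a cluster on a named Docker network (network and profile names are illustrative).
	out/minikube-linux-arm64 start -p net-demo --network=my-custom-net --driver=docker --container-runtime=crio
	# The same verification the test performs:
	docker network ls --format '{{.Name}}' | grep my-custom-net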

TestKicCustomNetwork/use_default_bridge_network (45.18s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-20210708232514-257783 --network=bridge
E0708 23:25:32.275780  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-20210708232514-257783 --network=bridge: (42.845616497s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210708232514-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-20210708232514-257783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-20210708232514-257783: (2.288064478s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (45.18s)

TestKicExistingNetwork (46.18s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-20210708232559-257783 --network=existing-network
E0708 23:26:36.692143  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:26:36.697496  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:26:36.707673  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:26:36.727917  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:26:36.768148  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:26:36.848382  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:26:37.008708  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:26:37.329192  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:26:37.969975  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:26:39.250942  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:26:41.811121  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-20210708232559-257783 --network=existing-network: (43.410772542s)
helpers_test.go:176: Cleaning up "existing-network-20210708232559-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-20210708232559-257783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-20210708232559-257783: (2.457538795s)
--- PASS: TestKicExistingNetwork (46.18s)
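
Unlike the previous two cases, the network here must exist before minikube starts, and minikube is expected to attach to it rather than create its own. A sketch of that flow (names are illustrative):

	docker network create existing-network
	out/minikube-linux-arm64 start -p existing-net-demo --network=existing-network --driver=docker --container-runtime=crio
	# The node container should show up inside the pre-existing network:
	docker network inspect existing-network -f '{{range .Containers}}{{.Name}} {{end}}'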

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMultiNode/serial/FreshStart2Nodes (139.02s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210708232645-257783 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0708 23:26:46.931937  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:26:57.172099  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:27:17.652516  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:27:58.613375  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210708232645-257783 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.503534353s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.02s)

TestMultiNode/serial/DeployApp2Nodes (4.92s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- rollout status deployment/busybox
multinode_test.go:467: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- rollout status deployment/busybox: (2.481408746s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- exec busybox-84b6686758-h75fp -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- exec busybox-84b6686758-qgrsm -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- exec busybox-84b6686758-h75fp -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- exec busybox-84b6686758-qgrsm -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- exec busybox-84b6686758-h75fp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- exec busybox-84b6686758-qgrsm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.92s)
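
The rollout plus the per-pod nslookup calls above amount to a cross-node DNS smoke test: every busybox replica, whichever node it landed on, must resolve an external name, the short service name, and the fully qualified one. Condensed into a loop (profile name taken from this run; the pod listing assumes only the test pods exist in the default namespace):

	PROFILE=multinode-20210708232645-257783
	for pod in $(out/minikube-linux-arm64 kubectl -p $PROFILE -- get pods -o jsonpath='{.items[*].metadata.name}'); do
	  for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
	    out/minikube-linux-arm64 kubectl -p $PROFILE -- exec "$pod" -- nslookup "$name"
	  done
	done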

TestMultiNode/serial/PingHostFrom2Pods (1.13s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- exec busybox-84b6686758-h75fp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 ssh -p multinode-20210708232645-257783 "ip -4 -br -o a s eth0 | tr -s ' ' | cut -d' ' -f3"
multinode_test.go:529: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210708232645-257783 -- exec busybox-84b6686758-qgrsm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 ssh -p multinode-20210708232645-257783 "ip -4 -br -o a s eth0 | tr -s ' ' | cut -d' ' -f3"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.13s)
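
The awk/cut pipeline above picks out the address that host.minikube.internal resolves to inside a pod (busybox's nslookup prints it on line 5), while the ssh command reads the node's own eth0 address; the test asserts that the two views agree. Reproduced with names from this run:

	PROFILE=multinode-20210708232645-257783
	POD=busybox-84b6686758-h75fp
	POD_VIEW=$(out/minikube-linux-arm64 kubectl -p $PROFILE -- exec $POD -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	NODE_VIEW=$(out/minikube-linux-arm64 ssh -p $PROFILE "ip -4 -br -o a s eth0 | tr -s ' ' | cut -d' ' -f3")
	echo "pod sees $POD_VIEW, node reports $NODE_VIEW"   # NODE_VIEW carries a /prefix suffix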

TestMultiNode/serial/AddNode (30.27s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-20210708232645-257783 -v 3 --alsologtostderr
E0708 23:29:20.533549  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-20210708232645-257783 -v 3 --alsologtostderr: (29.563419849s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.27s)

TestMultiNode/serial/ProfileList (0.3s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.30s)

TestMultiNode/serial/CopyFile (2.34s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 status --output json --alsologtostderr
helpers_test.go:532: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 cp testdata/cp-test.txt multinode-20210708232645-257783-m02:/home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 ssh -n multinode-20210708232645-257783-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 cp testdata/cp-test.txt multinode-20210708232645-257783-m03:/home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 ssh -n multinode-20210708232645-257783-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.34s)
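
In the cp calls above, a bare destination path lands on the control plane, while prefixing the path with a machine name (the -m02 / -m03 suffixes) targets that node; ssh -n then picks the node to verify on. The same sequence, trimmed to one worker:

	PROFILE=multinode-20210708232645-257783
	out/minikube-linux-arm64 -p $PROFILE cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p $PROFILE cp testdata/cp-test.txt $PROFILE-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p $PROFILE ssh -n $PROFILE-m02 "sudo cat /home/docker/cp-test.txt"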

TestMultiNode/serial/StopNode (2.44s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210708232645-257783 node stop m03: (1.284807135s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210708232645-257783 status: exit status 7 (594.439754ms)

-- stdout --
	multinode-20210708232645-257783
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210708232645-257783-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210708232645-257783-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210708232645-257783 status --alsologtostderr: exit status 7 (559.547734ms)

-- stdout --
	multinode-20210708232645-257783
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210708232645-257783-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210708232645-257783-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0708 23:29:45.396770  315820 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:29:45.397218  315820 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:29:45.397228  315820 out.go:299] Setting ErrFile to fd 2...
	I0708 23:29:45.397232  315820 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:29:45.397432  315820 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:29:45.397643  315820 out.go:293] Setting JSON to false
	I0708 23:29:45.397676  315820 mustload.go:65] Loading cluster: multinode-20210708232645-257783
	I0708 23:29:45.398242  315820 status.go:253] checking status of multinode-20210708232645-257783 ...
	I0708 23:29:45.398934  315820 cli_runner.go:115] Run: docker container inspect multinode-20210708232645-257783 --format={{.State.Status}}
	I0708 23:29:45.437025  315820 status.go:328] multinode-20210708232645-257783 host status = "Running" (err=<nil>)
	I0708 23:29:45.437043  315820 host.go:66] Checking if "multinode-20210708232645-257783" exists ...
	I0708 23:29:45.437331  315820 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210708232645-257783
	I0708 23:29:45.473193  315820 host.go:66] Checking if "multinode-20210708232645-257783" exists ...
	I0708 23:29:45.473477  315820 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 23:29:45.473529  315820 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210708232645-257783
	I0708 23:29:45.516202  315820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49537 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/multinode-20210708232645-257783/id_rsa Username:docker}
	I0708 23:29:45.619418  315820 ssh_runner.go:149] Run: systemctl --version
	I0708 23:29:45.622430  315820 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:29:45.631211  315820 kubeconfig.go:93] found "multinode-20210708232645-257783" server: "https://192.168.49.2:8443"
	I0708 23:29:45.631254  315820 api_server.go:164] Checking apiserver status ...
	I0708 23:29:45.631291  315820 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0708 23:29:45.642926  315820 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1277/cgroup
	I0708 23:29:45.648733  315820 api_server.go:180] apiserver freezer: "11:freezer:/docker/3831ea90b8a54ccf3a352471a98cde55d350fa3fe108258d9f498fa5b01c2eef/system.slice/crio-af13062ed00f4eb7e5598267c4ee73e6b1b70b5daf96eaeaad1c29cf5d784a63.scope"
	I0708 23:29:45.648776  315820 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/3831ea90b8a54ccf3a352471a98cde55d350fa3fe108258d9f498fa5b01c2eef/system.slice/crio-af13062ed00f4eb7e5598267c4ee73e6b1b70b5daf96eaeaad1c29cf5d784a63.scope/freezer.state
	I0708 23:29:45.654082  315820 api_server.go:202] freezer state: "THAWED"
	I0708 23:29:45.654106  315820 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0708 23:29:45.662528  315820 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0708 23:29:45.662546  315820 status.go:419] multinode-20210708232645-257783 apiserver status = Running (err=<nil>)
	I0708 23:29:45.662554  315820 status.go:255] multinode-20210708232645-257783 status: &{Name:multinode-20210708232645-257783 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 23:29:45.662572  315820 status.go:253] checking status of multinode-20210708232645-257783-m02 ...
	I0708 23:29:45.662843  315820 cli_runner.go:115] Run: docker container inspect multinode-20210708232645-257783-m02 --format={{.State.Status}}
	I0708 23:29:45.699359  315820 status.go:328] multinode-20210708232645-257783-m02 host status = "Running" (err=<nil>)
	I0708 23:29:45.699378  315820 host.go:66] Checking if "multinode-20210708232645-257783-m02" exists ...
	I0708 23:29:45.699651  315820 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210708232645-257783-m02
	I0708 23:29:45.734465  315820 host.go:66] Checking if "multinode-20210708232645-257783-m02" exists ...
	I0708 23:29:45.734761  315820 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0708 23:29:45.734800  315820 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210708232645-257783-m02
	I0708 23:29:45.776055  315820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49542 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/machines/multinode-20210708232645-257783-m02/id_rsa Username:docker}
	I0708 23:29:45.855373  315820 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0708 23:29:45.863336  315820 status.go:255] multinode-20210708232645-257783-m02 status: &{Name:multinode-20210708232645-257783-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0708 23:29:45.863384  315820 status.go:253] checking status of multinode-20210708232645-257783-m03 ...
	I0708 23:29:45.863685  315820 cli_runner.go:115] Run: docker container inspect multinode-20210708232645-257783-m03 --format={{.State.Status}}
	I0708 23:29:45.900890  315820 status.go:328] multinode-20210708232645-257783-m03 host status = "Stopped" (err=<nil>)
	I0708 23:29:45.900905  315820 status.go:341] host is not running, skipping remaining checks
	I0708 23:29:45.900909  315820 status.go:255] multinode-20210708232645-257783-m03 status: &{Name:multinode-20210708232645-257783-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
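
The stderr trace above shows the three-step apiserver probe behind status: find the kube-apiserver process, confirm its freezer cgroup is THAWED (i.e. the node is not paused), then hit /healthz. A rough manual equivalent; the endpoint is specific to this run, and the /healthz request may need the profile's client certificate if anonymous auth is disabled:

	PROFILE=multinode-20210708232645-257783
	PID=$(out/minikube-linux-arm64 ssh -p $PROFILE "sudo pgrep -xnf kube-apiserver.*minikube.*")
	out/minikube-linux-arm64 ssh -p $PROFILE "sudo egrep ^[0-9]+:freezer: /proc/$PID/cgroup"
	curl -sk https://192.168.49.2:8443/healthz   # "ok" with HTTP 200 when healthy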

TestMultiNode/serial/StartAfterStop (37.08s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 node start m03 --alsologtostderr
E0708 23:30:04.594627  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
multinode_test.go:235: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210708232645-257783 node start m03 --alsologtostderr: (36.235008506s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.08s)

TestMultiNode/serial/RestartKeepsNodes (153.84s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210708232645-257783
multinode_test.go:271: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-20210708232645-257783
multinode_test.go:271: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-20210708232645-257783: (41.497356695s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210708232645-257783 --wait=true -v=8 --alsologtostderr
E0708 23:31:36.692677  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:32:04.374178  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210708232645-257783 --wait=true -v=8 --alsologtostderr: (1m52.236490048s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210708232645-257783
--- PASS: TestMultiNode/serial/RestartKeepsNodes (153.84s)
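
The invariant being tested: the node list recorded before the stop must match the list after the restart, i.e. a full stop/start cycle may not forget workers. Condensed:

	PROFILE=multinode-20210708232645-257783
	out/minikube-linux-arm64 node list -p $PROFILE        # record the nodes
	out/minikube-linux-arm64 stop -p $PROFILE
	out/minikube-linux-arm64 start -p $PROFILE --wait=true
	out/minikube-linux-arm64 node list -p $PROFILE        # should match the first listing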

TestMultiNode/serial/DeleteNode (5.42s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210708232645-257783 node delete m03: (4.704947512s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 status --alsologtostderr
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.42s)

TestMultiNode/serial/StopMultiNode (40.48s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 stop
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210708232645-257783 stop: (40.237628104s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210708232645-257783 status: exit status 7 (123.77114ms)

-- stdout --
	multinode-20210708232645-257783
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210708232645-257783-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210708232645-257783 status --alsologtostderr: exit status 7 (121.416832ms)

-- stdout --
	multinode-20210708232645-257783
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210708232645-257783-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0708 23:33:42.667596  326195 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:33:42.667690  326195 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:33:42.667723  326195 out.go:299] Setting ErrFile to fd 2...
	I0708 23:33:42.667732  326195 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:33:42.667868  326195 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:33:42.668033  326195 out.go:293] Setting JSON to false
	I0708 23:33:42.668060  326195 mustload.go:65] Loading cluster: multinode-20210708232645-257783
	I0708 23:33:42.668790  326195 status.go:253] checking status of multinode-20210708232645-257783 ...
	I0708 23:33:42.669507  326195 cli_runner.go:115] Run: docker container inspect multinode-20210708232645-257783 --format={{.State.Status}}
	I0708 23:33:42.704836  326195 status.go:328] multinode-20210708232645-257783 host status = "Stopped" (err=<nil>)
	I0708 23:33:42.704853  326195 status.go:341] host is not running, skipping remaining checks
	I0708 23:33:42.704858  326195 status.go:255] multinode-20210708232645-257783 status: &{Name:multinode-20210708232645-257783 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0708 23:33:42.704884  326195 status.go:253] checking status of multinode-20210708232645-257783-m02 ...
	I0708 23:33:42.705169  326195 cli_runner.go:115] Run: docker container inspect multinode-20210708232645-257783-m02 --format={{.State.Status}}
	I0708 23:33:42.738020  326195 status.go:328] multinode-20210708232645-257783-m02 host status = "Stopped" (err=<nil>)
	I0708 23:33:42.738036  326195 status.go:341] host is not running, skipping remaining checks
	I0708 23:33:42.738042  326195 status.go:255] multinode-20210708232645-257783-m02 status: &{Name:multinode-20210708232645-257783-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.48s)

TestMultiNode/serial/RestartMultiNode (101.15s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210708232645-257783 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0708 23:35:04.592881  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
multinode_test.go:335: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210708232645-257783 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m40.388418291s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210708232645-257783 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (101.15s)

TestMultiNode/serial/ValidateNameConflict (54.2s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210708232645-257783
multinode_test.go:433: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210708232645-257783-m02 --driver=docker  --container-runtime=crio
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-20210708232645-257783-m02 --driver=docker  --container-runtime=crio: exit status 14 (69.045026ms)

-- stdout --
	* [multinode-20210708232645-257783-m02] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=11942
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210708232645-257783-m02' is duplicated with machine name 'multinode-20210708232645-257783-m02' in profile 'multinode-20210708232645-257783'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210708232645-257783-m03 --driver=docker  --container-runtime=crio
multinode_test.go:441: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210708232645-257783-m03 --driver=docker  --container-runtime=crio: (50.59977145s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-20210708232645-257783
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-20210708232645-257783: exit status 80 (294.836014ms)

-- stdout --
	* Adding node m03 to cluster multinode-20210708232645-257783
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210708232645-257783-m03 already exists in multinode-20210708232645-257783-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-20210708232645-257783-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-20210708232645-257783-m03: (3.181306689s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (54.20s)

TestDebPackageInstall/install_arm64_debian:sid/minikube (0s)
=== RUN   TestDebPackageInstall/install_arm64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_arm64_debian:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_arm64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_arm64_debian:10/minikube (0s)
=== RUN   TestDebPackageInstall/install_arm64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_arm64_debian:9/minikube (0s)
=== RUN   TestDebPackageInstall/install_arm64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_arm64_ubuntu:latest/minikube (0s)
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube (0s)
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube (0.00s)

TestScheduledStopUnix (70.39s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:126: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-20210708233807-257783 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:126: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-20210708233807-257783 --memory=2048 --driver=docker  --container-runtime=crio: (44.6170367s)
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210708233807-257783 --schedule 5m
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-20210708233807-257783 -n scheduled-stop-20210708233807-257783
scheduled_stop_test.go:167: signal error was:  <nil>
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210708233807-257783 --schedule 8s
scheduled_stop_test.go:167: signal error was:  os: process already finished
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210708233807-257783 --cancel-scheduled
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210708233807-257783 -n scheduled-stop-20210708233807-257783
scheduled_stop_test.go:203: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-20210708233807-257783
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210708233807-257783 --schedule 5s
scheduled_stop_test.go:167: signal error was:  os: process already finished
scheduled_stop_test.go:203: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-20210708233807-257783
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210708233807-257783 -n scheduled-stop-20210708233807-257783
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210708233807-257783 -n scheduled-stop-20210708233807-257783
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210708233807-257783 -n scheduled-stop-20210708233807-257783
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210708233807-257783 -n scheduled-stop-20210708233807-257783
scheduled_stop_test.go:174: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210708233807-257783 -n scheduled-stop-20210708233807-257783: exit status 7 (105.003886ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:174: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20210708233807-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-20210708233807-257783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-20210708233807-257783: (5.355179403s)
--- PASS: TestScheduledStopUnix (70.39s)
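
The flags exercised above: --schedule arms a background stop after the given delay (the deadline shows up as TimeToStop in status), a second --schedule replaces the pending one, and --cancel-scheduled aborts it. A compact sketch with an illustrative profile name:

	out/minikube-linux-arm64 stop -p sched-demo --schedule 5m
	out/minikube-linux-arm64 status -p sched-demo --format='{{.TimeToStop}}'   # pending deadline
	out/minikube-linux-arm64 stop -p sched-demo --cancel-scheduled             # cluster keeps running
	out/minikube-linux-arm64 stop -p sched-demo --schedule 5s && sleep 10
	out/minikube-linux-arm64 status -p sched-demo --format='{{.Host}}'         # now "Stopped" (exit status 7)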

TestInsufficientStorage (20.64s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-20210708233917-257783 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-20210708233917-257783 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (13.818113862s)

-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210708233917-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"b5749c59-b7b5-41ea-8131-fc27064c942b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=11942"},"datacontenttype":"application/json","id":"97b493c2-fbd6-4ebe-beea-bd048abc0113","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig"},"datacontenttype":"application/json","id":"55962a5b-2abf-42aa-972f-19b8598d77b9","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube"},"datacontenttype":"application/json","id":"352924e3-3f83-48b1-9827-8f6002d3642e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"},"datacontenttype":"application/json","id":"8bff66e3-c6c0-4923-9527-e8255b27c813","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"bfb9fd3e-2aa7-4845-b62b-98cb5a0efa6b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"7c18a61a-a0a3-4c02-a2cc-2203d6c7dcc0","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"58392ad9-a3c6-4ce8-8fde-c0909ec6c2d9","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"1a55aa95-f0e9-4ceb-ba0b-8cdc39628080","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210708233917-257783 in cluster insufficient-storage-20210708233917-257783","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"b6d82769-1f8f-4467-a3bc-5ece2cd15afa","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"9b01e238-c64f-4020-a117-4a19cce78631","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"9d4bd9bd-7162-49a1-a83c-443a92507827","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"7d005fc5-b9af-4b83-8a0c-156b4ec0602a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-20210708233917-257783 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-20210708233917-257783 --output=json --layout=cluster: exit status 7 (275.861284ms)

-- stdout --
	{"Name":"insufficient-storage-20210708233917-257783","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210708233917-257783","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0708 23:39:32.003523  362522 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210708233917-257783" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-20210708233917-257783 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-20210708233917-257783 --output=json --layout=cluster: exit status 7 (274.508411ms)

-- stdout --
	{"Name":"insufficient-storage-20210708233917-257783","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210708233917-257783","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0708 23:39:32.279573  362559 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210708233917-257783" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	E0708 23:39:32.287610  362559 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/insufficient-storage-20210708233917-257783/events.json: no such file or directory

** /stderr **
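Both status payloads above share one shape: top-level Name/StatusCode/StatusName plus Components and Nodes (StatusCode values seen here: 507 InsufficientStorage, 500 Error, 405 Stopped). A sketch of matching Go types, inferred only from the two payloads printed above; minikube's real definitions live in its own source and may differ:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type component struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    type node struct {
        Name       string               `json:"Name"`
        StatusCode int                  `json:"StatusCode"`
        StatusName string               `json:"StatusName"`
        Components map[string]component `json:"Components"`
    }

    type clusterState struct {
        Name          string               `json:"Name"`
        StatusCode    int                  `json:"StatusCode"`
        StatusName    string               `json:"StatusName"`
        StatusDetail  string               `json:"StatusDetail"`
        Step          string               `json:"Step"`       // only in the first payload
        StepDetail    string               `json:"StepDetail"` // only in the first payload
        BinaryVersion string               `json:"BinaryVersion"`
        Components    map[string]component `json:"Components"`
        Nodes         []node               `json:"Nodes"`
    }

    func main() {
        var st clusterState
        if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
            return
        }
        fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
    }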
helpers_test.go:176: Cleaning up "insufficient-storage-20210708233917-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-20210708233917-257783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-20210708233917-257783: (6.265348396s)
--- PASS: TestInsufficientStorage (20.64s)

TestRunningBinaryUpgrade (95.6s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:124: (dbg) Run:  /tmp/minikube-v1.17.0.113607905.exe start -p running-upgrade-20210708234550-257783 --memory=2200 --vm-driver=docker  --container-runtime=crio

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:124: (dbg) Done: /tmp/minikube-v1.17.0.113607905.exe start -p running-upgrade-20210708234550-257783 --memory=2200 --vm-driver=docker  --container-runtime=crio: (54.540873187s)
version_upgrade_test.go:134: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-20210708234550-257783 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:134: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-20210708234550-257783 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.763769528s)
helpers_test.go:176: Cleaning up "running-upgrade-20210708234550-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-20210708234550-257783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-20210708234550-257783: (3.712265249s)
--- PASS: TestRunningBinaryUpgrade (95.60s)

TestKubernetesUpgrade (130.34s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210708234222-257783 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0708 23:42:59.735799  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210708234222-257783 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (55.921380821s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-20210708234222-257783
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-20210708234222-257783: (2.205277256s)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-20210708234222-257783 status --format={{.Host}}
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-20210708234222-257783 status --format={{.Host}}: exit status 7 (100.624571ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:252: status error: exit status 7 (may be ok)
version_upgrade_test.go:261: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210708234222-257783 --memory=2200 --kubernetes-version=v1.22.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:261: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210708234222-257783 --memory=2200 --kubernetes-version=v1.22.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.218803419s)
version_upgrade_test.go:266: (dbg) Run:  kubectl --context kubernetes-upgrade-20210708234222-257783 version --output=json
version_upgrade_test.go:285: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210708234222-257783 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210708234222-257783 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=crio: exit status 106 (96.215551ms)

-- stdout --
	* [kubernetes-upgrade-20210708234222-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=11942
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-beta.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210708234222-257783
	    minikube start -p kubernetes-upgrade-20210708234222-257783 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210708234222-2577832 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210708234222-257783 --kubernetes-version=v1.22.0-beta.0
	    

** /stderr **
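The refused downgrade exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED) without touching the cluster, which makes the failure scriptable. A hedged sketch of detecting it from the exit code alone, with the binary path and profile name taken verbatim from the log; this is an illustration, not part of the test suite:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "start",
            "-p", "kubernetes-upgrade-20210708234222-257783",
            "--kubernetes-version=v1.14.0",
            "--driver=docker", "--container-runtime=crio")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 106 {
            // The K8S_DOWNGRADE_UNSUPPORTED case above: follow the printed
            // suggestion (delete the profile, then recreate it at v1.14.0).
            fmt.Println("downgrade refused; recreate the cluster instead")
        }
    }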
version_upgrade_test.go:291: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:293: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210708234222-257783 --memory=2200 --kubernetes-version=v1.22.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:293: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210708234222-257783 --memory=2200 --kubernetes-version=v1.22.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.607919703s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210708234222-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-20210708234222-257783
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-20210708234222-257783: (3.074779813s)
--- PASS: TestKubernetesUpgrade (130.34s)

TestPause/serial/Start (308.54s)

=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-arm64 start -p pause-20210708233938-257783 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio

=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-arm64 start -p pause-20210708233938-257783 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (5m8.536813202s)
--- PASS: TestPause/serial/Start (308.54s)

TestNetworkPlugins/group/false (0.85s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-arm64 start -p false-20210708233939-257783 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-20210708233939-257783 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (298.625939ms)

-- stdout --
	* [false-20210708233939-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=11942
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0708 23:39:39.270713  363039 out.go:286] Setting OutFile to fd 1 ...
	I0708 23:39:39.270859  363039 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:39:39.270878  363039 out.go:299] Setting ErrFile to fd 2...
	I0708 23:39:39.270891  363039 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0708 23:39:39.271033  363039 root.go:312] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/bin
	I0708 23:39:39.271328  363039 out.go:293] Setting JSON to false
	I0708 23:39:39.272249  363039 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8528,"bootTime":1625779051,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1038-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0708 23:39:39.272331  363039 start.go:121] virtualization:  
	I0708 23:39:39.277557  363039 out.go:165] * [false-20210708233939-257783] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0708 23:39:39.280087  363039 out.go:165]   - MINIKUBE_LOCATION=11942
	I0708 23:39:39.277727  363039 notify.go:169] Checking for updates...
	I0708 23:39:39.282211  363039 out.go:165]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/kubeconfig
	I0708 23:39:39.284041  363039 out.go:165]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube
	I0708 23:39:39.286602  363039 out.go:165]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0708 23:39:39.287085  363039 driver.go:335] Setting default libvirt URI to qemu:///system
	I0708 23:39:39.374686  363039 docker.go:132] docker version: linux-20.10.7
	I0708 23:39:39.374777  363039 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0708 23:39:39.500513  363039 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-07-08 23:39:39.429323323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1038-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8227766272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
	I0708 23:39:39.500610  363039 docker.go:244] overlay module found
	I0708 23:39:39.502882  363039 out.go:165] * Using the docker driver based on user configuration
	I0708 23:39:39.502898  363039 start.go:278] selected driver: docker
	I0708 23:39:39.502903  363039 start.go:751] validating driver "docker" against <nil>
	I0708 23:39:39.502918  363039 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0708 23:39:39.502956  363039 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0708 23:39:39.502970  363039 out.go:230] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0708 23:39:39.515764  363039 out.go:165]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0708 23:39:39.518805  363039 out.go:165] 
	W0708 23:39:39.518883  363039 out.go:230] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0708 23:39:39.521198  363039 out.go:165] 

** /stderr **
helpers_test.go:176: Cleaning up "false-20210708233939-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-20210708233939-257783
--- PASS: TestNetworkPlugins/group/false (0.85s)

TestPause/serial/SecondStartNoReconfiguration (5.73s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-arm64 start -p pause-20210708233938-257783 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:89: (dbg) Done: out/minikube-linux-arm64 start -p pause-20210708233938-257783 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.712083775s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.73s)

TestPause/serial/Unpause (0.52s)

=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-20210708233938-257783 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.52s)

TestPause/serial/DeletePaused (2.61s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-20210708233938-257783 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-arm64 delete -p pause-20210708233938-257783 --alsologtostderr -v=5: (2.613459085s)
--- PASS: TestPause/serial/DeletePaused (2.61s)

TestPause/serial/VerifyDeletedResources (0.2s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210708233938-257783
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210708233938-257783: exit status 1 (32.972393ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210708233938-257783

** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (0.20s)
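The VerifyDeletedResources check leans on `docker volume inspect` failing once the volume is gone: non-zero exit plus an empty JSON array on stdout, exactly as logged above. A small sketch of the same assertion, with the profile name from the log; an illustration, not the actual test helper:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "volume", "inspect",
            "pause-20210708233938-257783").Output()
        if err != nil && strings.TrimSpace(string(out)) == "[]" {
            fmt.Println("volume deleted, as the test expects")
        }
    }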

TestStartStop/group/old-k8s-version/serial/FirstStart (123.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-20210708234726-257783 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-20210708234726-257783 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0: (2m3.411751071s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (123.41s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-20210708234510-257783
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-20210708234510-257783: (1.052689979s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

TestStartStop/group/no-preload/serial/FirstStart (113.03s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-20210708234745-257783 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-beta.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-20210708234745-257783 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-beta.0: (1m53.031488926s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.03s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210708234726-257783 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [22fc6caf-e047-11eb-9af2-0242c6678d73] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [22fc6caf-e047-11eb-9af2-0242c6678d73] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.022515149s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210708234726-257783 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.52s)
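The DeployApp steps above follow one pattern: create testdata/busybox.yaml, wait for the pod labelled integration-test=busybox to reach Running, then exec `ulimit -n` in it. A rough Go sketch of that flow via kubectl, with the context name from the log; the polling loop is an illustrative assumption (the real wait logic lives in helpers_test.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        ctx := "old-k8s-version-20210708234726-257783"
        exec.Command("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml").Run()

        // Poll until the labelled pod reports phase Running (the test allows up to 8m0s).
        for deadline := time.Now().Add(8 * time.Minute); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
            out, _ := exec.Command("kubectl", "--context", ctx, "get", "pods",
                "-l", "integration-test=busybox",
                "-o", "jsonpath={.items[0].status.phase}").Output()
            if strings.TrimSpace(string(out)) == "Running" {
                break
            }
        }

        out, _ := exec.Command("kubectl", "--context", ctx, "exec", "busybox",
            "--", "/bin/sh", "-c", "ulimit -n").Output()
        fmt.Print(string(out))
    }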

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-20210708234726-257783 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210708234726-257783 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/no-preload/serial/DeployApp (8.76s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210708234745-257783 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [77f47ed5-7541-44d8-9a3c-ce0b0dccbcb5] Pending

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:340: "busybox" [77f47ed5-7541-44d8-9a3c-ce0b0dccbcb5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [77f47ed5-7541-44d8-9a3c-ce0b0dccbcb5] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.034656438s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210708234745-257783 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.76s)

TestStartStop/group/old-k8s-version/serial/Stop (20.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-20210708234726-257783 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-20210708234726-257783 --alsologtostderr -v=3: (20.420618483s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.42s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-20210708234745-257783 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210708234745-257783 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/no-preload/serial/Stop (20.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-20210708234745-257783 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-20210708234745-257783 --alsologtostderr -v=3: (20.304816204s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210708234726-257783 -n old-k8s-version-20210708234726-257783
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210708234726-257783 -n old-k8s-version-20210708234726-257783: exit status 7 (94.024232ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-20210708234726-257783 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (653.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-20210708234726-257783 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0
E0708 23:50:04.592566  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-20210708234726-257783 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0: (10m52.693252981s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210708234726-257783 -n old-k8s-version-20210708234726-257783
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (653.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210708234745-257783 -n no-preload-20210708234745-257783
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210708234745-257783 -n no-preload-20210708234745-257783: exit status 7 (90.739381ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-20210708234745-257783 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (363.17s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-20210708234745-257783 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-beta.0
E0708 23:51:36.692459  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:53:07.636015  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0708 23:55:04.592350  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-20210708234745-257783 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-beta.0: (6m2.743627338s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210708234745-257783 -n no-preload-20210708234745-257783
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (363.17s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.05s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-jz9t2" [e1c2497b-76e3-43e4-ac60-3d1096659758] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-jz9t2" [e1c2497b-76e3-43e4-ac60-3d1096659758] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.044590019s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.05s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-jz9t2" [e1c2497b-76e3-43e4-ac60-3d1096659758] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006110321s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210708234745-257783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-20210708234745-257783 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/Pause (2.78s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-20210708234745-257783 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210708234745-257783 -n no-preload-20210708234745-257783
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210708234745-257783 -n no-preload-20210708234745-257783: exit status 2 (303.945418ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-20210708234745-257783 -n no-preload-20210708234745-257783
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-20210708234745-257783 -n no-preload-20210708234745-257783: exit status 2 (351.48716ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-20210708234745-257783 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210708234745-257783 -n no-preload-20210708234745-257783
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-20210708234745-257783 -n no-preload-20210708234745-257783
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.78s)
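The Pause assertions above drive `minikube status --format=...` with Go template expressions ({{.APIServer}}, {{.Kubelet}}, {{.Host}}) and tolerate exit status 2 while components are paused. A sketch of how such a template renders against a status-shaped struct; the APIServer/Kubelet values mirror the Paused/Stopped outputs above, while the Host value is an assumption:

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        status := struct{ Host, Kubelet, APIServer string }{
            Host: "Running", Kubelet: "Stopped", APIServer: "Paused",
        }
        // Same template syntax the test passes via --format.
        t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
        t.Execute(os.Stdout, status) // prints "Paused", matching the stdout block above
    }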

TestStartStop/group/embed-certs/serial/FirstStart (100.48s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-20210708235636-257783 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.2
E0708 23:56:36.691744  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-20210708235636-257783 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.2: (1m40.483506073s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (100.48s)

TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210708235636-257783 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [c77868a1-527b-4c98-a48f-fe3fb66c1c81] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [c77868a1-527b-4c98-a48f-fe3fb66c1c81] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.025486569s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210708235636-257783 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-20210708235636-257783 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210708235636-257783 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/embed-certs/serial/Stop (20.28s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-20210708235636-257783 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-20210708235636-257783 --alsologtostderr -v=3: (20.280547343s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.28s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210708235636-257783 -n embed-certs-20210708235636-257783
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210708235636-257783 -n embed-certs-20210708235636-257783: exit status 7 (87.258894ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-20210708235636-257783 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (364.14s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-20210708235636-257783 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.2
E0708 23:59:38.895440  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:38.900808  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:38.911010  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:38.931187  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:38.971377  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:39.051569  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:39.211888  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:39.532749  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:39.736000  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0708 23:59:40.173868  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:41.454698  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:44.015450  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:49.136326  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0708 23:59:59.377240  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0709 00:00:04.592358  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0709 00:00:19.858125  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-20210708235636-257783 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.2: (6m3.795794716s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210708235636-257783 -n embed-certs-20210708235636-257783
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (364.14s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-5d8978d65d-bfbtn" [09c14a20-e048-11eb-8c7e-0242c0a83a02] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017408923s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-5d8978d65d-bfbtn" [09c14a20-e048-11eb-8c7e-0242c0a83a02] Running
E0709 00:01:00.819197  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004574731s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210708234726-257783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.31s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-20210708234726-257783 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/old-k8s-version/serial/Pause (2.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-20210708234726-257783 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210708234726-257783 -n old-k8s-version-20210708234726-257783
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210708234726-257783 -n old-k8s-version-20210708234726-257783: exit status 2 (315.899046ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-20210708234726-257783 -n old-k8s-version-20210708234726-257783
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-20210708234726-257783 -n old-k8s-version-20210708234726-257783: exit status 2 (318.545048ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-20210708234726-257783 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210708234726-257783 -n old-k8s-version-20210708234726-257783
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-20210708234726-257783 -n old-k8s-version-20210708234726-257783
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.80s)
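
The Pause subtest above reduces to the command sequence below. This is a sketch built from the exact invocations in the log; note that minikube status deliberately exits with code 2 while components are paused or stopped, which the test records as "may be ok".

  # Pause the whole profile.
  out/minikube-linux-arm64 pause -p old-k8s-version-20210708234726-257783 --alsologtostderr -v=1

  # APIServer should now report "Paused" (expect exit status 2 here).
  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210708234726-257783

  # Kubelet should report "Stopped" (again exit status 2).
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-20210708234726-257783

  # Unpause and re-run both status checks; they should now succeed.
  out/minikube-linux-arm64 unpause -p old-k8s-version-20210708234726-257783 --alsologtostderr -v=1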

TestStartStop/group/default-k8s-different-port/serial/FirstStart (108.77s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-different-port-20210709000109-257783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.2
E0709 00:01:36.691828  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
E0709 00:02:22.739370  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-different-port-20210709000109-257783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.2: (1m48.76925978s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (108.77s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.73s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210709000109-257783 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [bdfe754c-2c07-45e1-b866-9275864e0dad] Pending
helpers_test.go:340: "busybox" [bdfe754c-2c07-45e1-b866-9275864e0dad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [bdfe754c-2c07-45e1-b866-9275864e0dad] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.026824721s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210709000109-257783 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.73s)
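
The DeployApp step is reproducible with the two commands the test runs (a sketch; testdata/busybox.yaml is a fixture from the minikube integration test tree):

  # Create the busybox test pod from the integration-test fixture.
  kubectl --context default-k8s-different-port-20210709000109-257783 create -f testdata/busybox.yaml

  # After the pod reaches Running, confirm exec works by reading the open-file limit.
  kubectl --context default-k8s-different-port-20210709000109-257783 exec busybox -- /bin/sh -c "ulimit -n"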

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-different-port-20210709000109-257783 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210709000109-257783 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/default-k8s-different-port/serial/Stop (20.36s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-different-port-20210709000109-257783 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-different-port-20210709000109-257783 --alsologtostderr -v=3: (20.364281445s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.36s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210709000109-257783 -n default-k8s-different-port-20210709000109-257783
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210709000109-257783 -n default-k8s-different-port-20210709000109-257783: exit status 7 (98.696498ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-different-port-20210709000109-257783 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.20s)
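
To replay this check by hand: against a stopped profile, minikube status exits with code 7 (the test treats that as acceptable), and addons can still be enabled, with the change picked up on the next start. A sketch using the commands from the log:

  # Expect "Stopped" on stdout and exit status 7 while the profile is down.
  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210709000109-257783

  # Enabling an addon while stopped records it in the profile config.
  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-different-port-20210709000109-257783 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4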

TestStartStop/group/default-k8s-different-port/serial/SecondStart (360.78s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-different-port-20210709000109-257783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.2
E0709 00:04:29.938666  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:29.943872  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:29.954044  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:29.974240  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:30.014499  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:30.094800  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:30.255064  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:30.575493  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:31.216416  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:32.497364  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:35.058211  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:38.895833  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0709 00:04:40.179211  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:04:50.420275  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-different-port-20210709000109-257783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.2: (6m0.196580145s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210709000109-257783 -n default-k8s-different-port-20210709000109-257783
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (360.78s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-gtn8k" [421704d6-28d6-4654-8d08-24a2c3e311fc] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018323897s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.32s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-gtn8k" [421704d6-28d6-4654-8d08-24a2c3e311fc] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005501699s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210708235636-257783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.32s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-20210708235636-257783 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/embed-certs/serial/Pause (2.78s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-20210708235636-257783 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210708235636-257783 -n embed-certs-20210708235636-257783
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210708235636-257783 -n embed-certs-20210708235636-257783: exit status 2 (309.531022ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-20210708235636-257783 -n embed-certs-20210708235636-257783
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-20210708235636-257783 -n embed-certs-20210708235636-257783: exit status 2 (303.29648ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-20210708235636-257783 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210708235636-257783 -n embed-certs-20210708235636-257783
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-20210708235636-257783 -n embed-certs-20210708235636-257783
E0709 00:05:04.592177  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.78s)

TestStartStop/group/newest-cni/serial/FirstStart (75.67s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-20210709000508-257783 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-beta.0
E0709 00:05:10.901126  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:05:51.861302  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-20210709000508-257783 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-beta.0: (1m15.671600376s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (75.67s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-20210709000508-257783 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/newest-cni/serial/Stop (1.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-20210709000508-257783 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-20210709000508-257783 --alsologtostderr -v=3: (1.413257312s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.41s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210709000508-257783 -n newest-cni-20210709000508-257783
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210709000508-257783 -n newest-cni-20210709000508-257783: exit status 7 (93.312938ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-20210709000508-257783 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (27.6s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-20210709000508-257783 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-beta.0
E0709 00:06:36.691774  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-20210709000508-257783 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-beta.0: (27.218934584s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210709000508-257783 -n newest-cni-20210709000508-257783
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.60s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-20210709000508-257783 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (2.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-20210709000508-257783 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-20210709000508-257783 -n newest-cni-20210709000508-257783
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-20210709000508-257783 -n newest-cni-20210709000508-257783: exit status 2 (307.30716ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-20210709000508-257783 -n newest-cni-20210709000508-257783
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-20210709000508-257783 -n newest-cni-20210709000508-257783: exit status 2 (311.006396ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-20210709000508-257783 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-20210709000508-257783 -n newest-cni-20210709000508-257783
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-20210709000508-257783 -n newest-cni-20210709000508-257783
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.56s)

TestNetworkPlugins/group/auto/Start (109.39s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p auto-20210708233938-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=crio
E0709 00:07:13.782149  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p auto-20210708233938-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=crio: (1m49.390374756s)
--- PASS: TestNetworkPlugins/group/auto/Start (109.39s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-20210708233938-257783 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210708233938-257783 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-7hfw8" [15e71f45-4e5e-4dae-b3ac-2cf4c7040b19] Pending
helpers_test.go:340: "netcat-66fbc655d5-7hfw8" [15e71f45-4e5e-4dae-b3ac-2cf4c7040b19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-7hfw8" [15e71f45-4e5e-4dae-b3ac-2cf4c7040b19] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006038136s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.36s)

TestNetworkPlugins/group/auto/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210708233938-257783 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.30s)

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210708233938-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210708233938-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.27s)
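
The three connectivity subtests above (DNS, Localhost, HairPin) each come down to one probe through the deployed netcat pod; the equivalent manual checks, using the same commands the test runs:

  # DNS: resolve the kubernetes.default service from inside the pod.
  kubectl --context auto-20210708233938-257783 exec deployment/netcat -- nslookup kubernetes.default

  # Localhost: the pod should reach its own port 8080 over localhost.
  kubectl --context auto-20210708233938-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

  # HairPin: the pod should reach itself back through its own service name.
  kubectl --context auto-20210708233938-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"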

TestNetworkPlugins/group/custom-weave/Start (90.75s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p custom-weave-20210708233940-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=crio
E0709 00:09:29.937938  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p custom-weave-20210708233940-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=crio: (1m30.75265579s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (90.75s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.05s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-rwmkp" [0231094e-bea6-4be8-b960-292c93e8ec8c] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.045311787s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.05s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-rwmkp" [0231094e-bea6-4be8-b960-292c93e8ec8c] Running
E0709 00:09:38.895396  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008662346s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210709000109-257783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-different-port-20210709000109-257783 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-different-port/serial/Pause (3.18s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-different-port-20210709000109-257783 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-different-port-20210709000109-257783 -n default-k8s-different-port-20210709000109-257783
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-different-port-20210709000109-257783 -n default-k8s-different-port-20210709000109-257783: exit status 2 (300.123609ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-different-port-20210709000109-257783 -n default-k8s-different-port-20210709000109-257783
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-different-port-20210709000109-257783 -n default-k8s-different-port-20210709000109-257783: exit status 2 (316.622896ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-different-port-20210709000109-257783 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-different-port-20210709000109-257783 -n default-k8s-different-port-20210709000109-257783
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-different-port-20210709000109-257783 -n default-k8s-different-port-20210709000109-257783
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.18s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-weave-20210708233940-257783 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-weave/NetCatPod (11.46s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210708233940-257783 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-vq74l" [6527ca5f-04e1-42e4-91c3-25232eb42fd0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-vq74l" [6527ca5f-04e1-42e4-91c3-25232eb42fd0] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 11.006394154s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (11.46s)

TestNetworkPlugins/group/calico/Start (90.51s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p calico-20210708233940-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=crio
E0709 00:11:36.692059  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p calico-20210708233940-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=crio: (1m30.513303288s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.51s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:340: "calico-node-g4s2s" [282437bf-be25-4f42-bf78-da77f2041f18] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.026386341s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-20210708233940-257783 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context calico-20210708233940-257783 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-pgllp" [080aa4b8-2c25-4691-b4ef-a8a3563a0fd8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-pgllp" [080aa4b8-2c25-4691-b4ef-a8a3563a0fd8] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005744002s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.45s)

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210708233940-257783 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:181: (dbg) Run:  kubectl --context calico-20210708233940-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:231: (dbg) Run:  kubectl --context calico-20210708233940-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (112.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-20210708233938-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0709 00:12:59.138872  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:12:59.144118  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:12:59.154293  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:12:59.174466  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:12:59.214664  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:12:59.294776  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:12:59.454988  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:12:59.775962  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:13:00.416111  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:13:01.696568  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:13:04.256729  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:13:09.377752  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:13:19.618006  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:13:40.099106  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:13:50.026330  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:13:50.031714  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:13:50.041903  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:13:50.062176  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:13:50.102361  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:13:50.182755  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:13:50.343025  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:13:50.663493  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:13:51.304316  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:13:52.584487  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:13:55.145397  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:14:00.266077  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:14:10.506392  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:14:21.060073  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:14:29.938930  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/old-k8s-version-20210708234726-257783/client.crt: no such file or directory
E0709 00:14:30.986909  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-20210708233938-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m52.448215878s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (112.45s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-20210708233938-257783 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210708233938-257783 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-4lbm4" [bebb318d-09d3-469e-9836-a0366818654c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0709 00:14:38.895318  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
helpers_test.go:340: "netcat-66fbc655d5-4lbm4" [bebb318d-09d3-469e-9836-a0366818654c] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005116889s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.45s)
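
NetCatPod force-replaces a small netcat deployment and polls until a pod labelled app=netcat reports Running. The manifest itself (testdata/netcat-deployment.yaml) is not reproduced in this report; a sketch of the equivalent manual steps:

    kubectl --context enable-default-cni-20210708233938-257783 \
        replace --force -f testdata/netcat-deployment.yaml
    # then watch until the pod leaves Pending / ContainersNotReady:
    kubectl --context enable-default-cni-20210708233938-257783 \
        get pods -l app=netcat -w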

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210708233938-257783 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210708233938-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210708233938-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)
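
Taken together, the last three checks probe three distinct paths through the CNI: DNS resolves kubernetes.default through cluster DNS, Localhost has the pod dial its own container port directly, and HairPin has the pod reach itself back through its own Service, which requires hairpin NAT on the node. The commands, as run above (kubectl --context flags omitted for brevity):

    kubectl exec deployment/netcat -- nslookup kubernetes.default        # cluster DNS
    kubectl exec deployment/netcat -- nc -w 5 -i 5 -z localhost 8080     # pod -> its own port
    kubectl exec deployment/netcat -- nc -w 5 -i 5 -z netcat 8080        # pod -> itself via its Service (hairpin)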

TestNetworkPlugins/group/kindnet/Start (101s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-20210708233939-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=crio
E0709 00:15:04.592336  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/addons-20210708230204-257783/client.crt: no such file or directory
E0709 00:15:11.947066  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:15:36.500563  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:36.505861  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:36.516061  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:36.536242  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:36.576508  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:36.656767  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:36.816907  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:37.137596  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:37.778187  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:39.058707  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:41.619580  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:42.980528  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:15:46.740073  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:15:56.980535  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:16:01.941439  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/no-preload-20210708234745-257783/client.crt: no such file or directory
E0709 00:16:17.461371  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:16:19.736544  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-20210708233939-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=crio: (1m41.004005527s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.00s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:340: "kindnet-dhv52" [c1a85e84-19ed-4cf7-a6aa-1351df950436] Running
E0709 00:16:33.868018  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/auto-20210708233938-257783/client.crt: no such file or directory
E0709 00:16:36.692435  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/functional-20210708231854-257783/client.crt: no such file or directory
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.019990741s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
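
ControllerPod only appears for CNIs that run an in-cluster agent: it waits for the kindnet DaemonSet pod to become healthy before the connectivity checks run. The equivalent manual check, using the label and namespace from the log:

    kubectl --context kindnet-20210708233939-257783 \
        -n kube-system get pods -l app=kindnet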

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-20210708233939-257783 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.55s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210708233939-257783 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-44nvk" [6fc69162-b460-4caf-b65a-b113775fc965] Pending
helpers_test.go:340: "netcat-66fbc655d5-44nvk" [6fc69162-b460-4caf-b65a-b113775fc965] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-44nvk" [6fc69162-b460-4caf-b65a-b113775fc965] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.008726921s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.55s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210708233939-257783 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210708233939-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210708233939-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (102.75s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-20210708233938-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=crio
E0709 00:16:58.421566  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:21.649956  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:21.655271  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:21.665460  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:21.685648  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:21.725838  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:21.806042  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:21.966335  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:22.286861  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:22.927726  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:24.208849  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:26.769312  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:31.889950  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:42.130557  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:17:59.138630  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
E0709 00:18:02.611171  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
E0709 00:18:20.341723  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/custom-weave-20210708233940-257783/client.crt: no such file or directory
E0709 00:18:26.821366  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/default-k8s-different-port-20210709000109-257783/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p bridge-20210708233938-257783 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=crio: (1m42.748330337s)
--- PASS: TestNetworkPlugins/group/bridge/Start (102.75s)
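
All three groups above run the identical suite; the only variable is the CNI selected at start time (--enable-default-cni=true for the built-in default config, --cni=kindnet, --cni=bridge). A sketch of how one might inspect the CNI config a profile actually installed (standard CNI config directory assumed):

    out/minikube-linux-arm64 ssh -p bridge-20210708233938-257783 "ls /etc/cni/net.d"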

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-20210708233938-257783 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.35s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210708233938-257783 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-l4bbm" [038bccdd-43d1-4abf-8b90-a02c079bcfaa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-l4bbm" [038bccdd-43d1-4abf-8b90-a02c079bcfaa] Running
E0709 00:18:43.571865  257783 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-crio-11942-251863-7a38b95f8e8296ab5337ba84b22cfa25f776c266/.minikube/profiles/calico-20210708233940-257783/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005711276s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.35s)

TestNetworkPlugins/group/bridge/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210708233938-257783 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210708233938-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210708233938-257783 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

Test skip (30/256)

TestDownloadOnly/v1.14.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.21.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.21.2/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.2/cached-images (0.00s)

TestDownloadOnly/v1.21.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.21.2/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.2/binaries (0.00s)

TestDownloadOnly/v1.21.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.21.2/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.2/kubectl (0.00s)

TestDownloadOnly/v1.22.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.22.0-beta.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.22.0-beta.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.22.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.22.0-beta.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (14.67s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-20210708230149-257783 --force --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-arm64 start --download-only -p download-docker-20210708230149-257783 --force --alsologtostderr --driver=docker  --container-runtime=crio: (14.154262429s)
aaa_download_only_test.go:238: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-20210708230149-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-20210708230149-257783
--- SKIP: TestDownloadOnlyKic (14.67s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:411: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:46: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1503: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:429: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:489: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)
=== RUN   TestPreload
preload_test.go:36: skipping TestPreload - not yet supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.31s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210709000109-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-20210709000109-257783
--- SKIP: TestStartStop/group/disable-driver-mounts (0.31s)

TestNetworkPlugins/group/kubenet (0.36s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as crio container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210708233938-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-20210708233938-257783
--- SKIP: TestNetworkPlugins/group/kubenet (0.36s)

TestNetworkPlugins/group/flannel (0.31s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210708233938-257783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p flannel-20210708233938-257783
--- SKIP: TestNetworkPlugins/group/flannel (0.31s)
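
The flannel skip records a known failure mode rather than a platform limitation: an iptables binary built against the legacy backend could not resolve the CNI-x chain. A sketch of the first diagnostic one would run when hitting this class of error (which backend is reported is environment-dependent):

    iptables --version    # prints "(legacy)" or "(nf_tables)" after the version string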